---
title: "Troubleshooting"
weight: 4
---
## Troubleshooting
### I am getting a warning about "Unable to get an update from the 'stable' chart repository"
Run `helm repo list`. If it shows your `stable` repository pointing to a `storage.googleapis.com` URL, you
will need to update that repository. On November 13, 2020, the Helm Charts repo [became unsupported](https://github.com/helm/charts#deprecation-timeline) after a year-long deprecation. An archive has been made available at
`https://charts.helm.sh/stable` but will no longer receive updates.
You can run the following command to fix your repository:
```console
$ helm repo add stable https://charts.helm.sh/stable --force-update
```
The same goes for the `incubator` repository, which has an archive available at https://charts.helm.sh/incubator.
You can run the following command to repair it:
```console
$ helm repo add incubator https://charts.helm.sh/incubator --force-update
```
### I am getting the warning 'WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.'
The old Google Helm chart repository has been replaced by a new Helm chart repository.
Run the following command to permanently fix this:
```console
$ helm repo add stable https://charts.helm.sh/stable --force-update
```
If you get a similar error for `incubator`, run this command:
```console
$ helm repo add incubator https://charts.helm.sh/incubator --force-update
```
### When I add a Helm repo, I get the error 'Error: Repo "https://kubernetes-charts.storage.googleapis.com" is no longer available'
The Helm Chart repositories are no longer supported after [a year-long deprecation period](https://github.com/helm/charts#deprecation-timeline).
Archives for these repositories are available at `https://charts.helm.sh/stable` and `https://charts.helm.sh/incubator`; however, they will no longer receive updates. The command
`helm repo add` will not let you add the old URLs unless you specify `--use-deprecated-repos`.
### On GKE (Google Kubernetes Engine) I get "No SSH tunnels currently open"
```
Error: Error forwarding ports: error upgrading connection: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-[redacted]"?
```
Another variation of the error message is:
```
Unable to connect to the server: x509: certificate signed by unknown authority
```
The issue is that your local Kubernetes config file must have the correct
credentials.
When you create a cluster on GKE, it will give you credentials, including SSL
certificates and certificate authorities. These need to be stored in a
Kubernetes config file (Default: `~/.kube/config`) so that `kubectl` and `helm`
can access them.
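If the credentials are missing or stale, refreshing them from GKE usually resolves both errors. A minimal sketch, assuming a hypothetical cluster name and zone:
```console
# Fetch fresh credentials for the cluster and write them into ~/.kube/config
$ gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Confirm kubectl (and therefore helm) is now using the expected context
$ kubectl config current-context
```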
### After migration from Helm 2, `helm list` shows only some (or none) of my releases
It is likely that you have missed the fact that Helm 3 now uses cluster
namespaces throughout to scope releases. This means that for all commands
referencing a release you must either:
* rely on the current namespace in the active kubernetes context (as described
by the `kubectl config view --minify` command),
* specify the correct namespace using the `--namespace`/`-n` flag, or
* for the `helm list` command, specify the `--all-namespaces`/`-A` flag
This applies to `helm ls`, `helm uninstall`, and all other `helm` commands
referencing a release.
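For example (the release and namespace names below are placeholders):
```console
# List releases in a specific namespace, or across all namespaces
$ helm list --namespace my-namespace
$ helm list --all-namespaces
# Other release commands also need the namespace
$ helm uninstall my-release --namespace my-namespace
```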
### On macOS, the file `/etc/.mdns_debug` is accessed. Why?
We are aware of a case on macOS where Helm will try to access a file named
`/etc/.mdns_debug`. If the file exists, Helm holds the file handle open while it
executes.
This is caused by macOS's MDNS library. It attempts to load that file to read
debugging settings (if enabled). The file handle probably should not be held open, and
this issue has been reported to Apple. However, it is macOS, not Helm, that causes this
behavior.
If you do not want Helm to load this file, you may be able to compile Helm as a
static binary that does not use the host network stack. Doing so will inflate the
binary size of Helm, but will prevent the file from being opened.
This issue was originally flagged as a potential security problem. But it has since
been determined that there is no flaw or vulnerability caused by this behavior.
### helm repo add fails when it used to work
In Helm 3.3.1 and earlier, the command `helm repo add <reponame> <url>` gave
no output if you attempted to add a repo that already existed. The flag
`--no-update` would raise an error if the repo was already registered.
In Helm 3.3.2 and later, an attempt to add an existing repo will error:
`Error: repository name (reponame) already exists, please specify a different name`
The default behavior is now reversed: `--no-update` is ignored, and if you
want to replace (overwrite) an existing repo, you can use `--force-update`.
This is due to a breaking change for a security fix as explained in the [Helm
3.3.2 release notes](https://github.com/helm/helm/releases/tag/v3.3.2).
### Enabling Kubernetes client logging
Printing log messages for debugging the Kubernetes client can be enabled using
the [klog](https://pkg.go.dev/k8s.io/klog) flags. Using the `-v` flag to set
verbosity level will be enough for most cases.
For example:
```
helm list -v 6
```
### Tiller installations stopped working and access is denied
Helm releases used to be available from <https://storage.googleapis.com/kubernetes-helm/>. As explained in ["Announcing get.helm.sh"](https://helm.sh/blog/get-helm-sh/), the official location changed in June 2019. [GitHub Container Registry](https://github.com/orgs/helm/packages/container/package/tiller) makes all the old Tiller images available.
If you are trying to download older versions of Helm from the storage bucket you used in the past, you may find that they are missing:
```
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details>
</Error>
```
Images began to be removed from the [legacy Tiller image location](https://gcr.io/kubernetes-helm/tiller) in August 2021. We have made these images available at the [GitHub Container Registry](https://github.com/orgs/helm/packages/container/package/tiller) location. For example, to download version v2.17.0, replace:
`https://storage.googleapis.com/kubernetes-helm/helm-v2.17.0-linux-amd64.tar.gz`
with:
`https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz`
To initialize with Helm v2.17.0:
`helm init --upgrade`
Or, if a different version is needed, use the `--tiller-image` flag to override the default location and install a specific Helm v2 version:
`helm init --tiller-image ghcr.io/helm/tiller:v2.16.9`
**Note:** The Helm maintainers recommend migration to a currently-supported version of Helm. Helm v2.17.0 was the final release of Helm v2; Helm v2 has been unsupported since November 2020, as detailed in [Helm 2 and the Charts Project Are Now Unsupported](https://helm.sh/blog/helm-2-becomes-unsupported/). Many CVEs have been flagged against Helm since then; those exploits are patched in Helm v3 but will never be patched in Helm v2. See the [current list of published Helm advisories](https://github.com/helm/helm/security/advisories?state=published) and make a plan to [migrate to Helm v3](https://helm.sh/docs/topics/v2_v3_migration/#helm) today.
---
title: "Changes Since Helm 2"
weight: 1
---
## Changes since Helm 2
Here's an exhaustive list of all the major changes introduced in Helm 3.
### Removal of Tiller
During the Helm 2 development cycle, we introduced Tiller. Tiller played an
important role for teams working on a shared cluster - it made it possible for
multiple different operators to interact with the same set of releases.
With role-based access controls (RBAC) enabled by default in Kubernetes 1.6,
locking down Tiller for use in a production scenario became more difficult to
manage. Due to the vast number of possible security policies, our stance was to
provide a permissive default configuration. This allowed first-time users to
start experimenting with Helm and Kubernetes without having to dive headfirst
into the security controls. Unfortunately, this permissive configuration could
grant a user a broad range of permissions they weren’t intended to have. DevOps
and SREs had to learn additional operational steps when installing Tiller into a
multi-tenant cluster.
After hearing how community members were using Helm in certain scenarios, we
found that Tiller’s release management system did not need to rely upon an
in-cluster operator to maintain state or act as a central hub for Helm release
information. Instead, we could simply fetch information from the Kubernetes API
server, render the Charts client-side, and store a record of the installation in
Kubernetes.
Tiller’s primary goal could be accomplished without Tiller, so one of the first
decisions we made regarding Helm 3 was to completely remove Tiller.
With Tiller gone, the security model for Helm is radically simplified. Helm 3
now supports all the security, identity, and authorization features of modern
Kubernetes. Helm’s permissions are evaluated using your [kubeconfig
file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
Cluster administrators can restrict user permissions at whatever granularity
they see fit. Releases are still recorded in-cluster, and the rest of Helm’s
functionality remains.
### Improved Upgrade Strategy: 3-way Strategic Merge Patches
Helm 2 used a two-way strategic merge patch. During an upgrade, it compared the
most recent chart's manifest against the proposed chart's manifest (the one
supplied during `helm upgrade`). It compared the differences between these two
charts to determine what changes needed to be applied to the resources in
Kubernetes. If changes were applied to the cluster out-of-band (such as during a
`kubectl edit`), those changes were not considered. This resulted in resources
being unable to roll back to their previous state: because Helm only considered
the last applied chart's manifest as its current state, if there were no changes
in the chart's state, the live state was left unchanged.
In Helm 3, we now use a three-way strategic merge patch. Helm considers the old
manifest, its live state, and the new manifest when generating a patch.
#### Examples
Let's go through a few common examples of what this change impacts.
##### Rolling back where live state has changed
Your team just deployed their application to production on Kubernetes using
Helm. The chart contains a Deployment object where the number of replicas is set
to three:
```console
$ helm install myapp ./myapp
```
A new developer joins the team. On their first day while observing the
production cluster, a horrible coffee-spilling-on-the-keyboard accident happens
and they `kubectl scale` the production deployment from three replicas down to
zero.
```console
$ kubectl scale --replicas=0 deployment/myapp
```
Another developer on your team notices that the production site is down and
decides to rollback the release to its previous state:
```console
$ helm rollback myapp
```
What happens?
In Helm 2, it would generate a patch, comparing the old manifest against the new
manifest. Because this is a rollback, it's the same manifest. Helm would
determine that there is nothing to change because there is no difference between
the old manifest and the new manifest. The replica count continues to stay at
zero. Panic ensues.
In Helm 3, the patch is generated using the old manifest, the live state, and
the new manifest. Helm recognizes that the old state was at three, the live
state is at zero and the new manifest wishes to change it back to three, so it
generates a patch to change the state back to three.
##### Upgrades where live state has changed
Many service meshes and other controller-based applications inject data into
Kubernetes objects. This can be something like a sidecar, labels, or other
information. Previously if you had the given manifest rendered from a Chart:
```yaml
containers:
- name: server
  image: nginx:2.0.0
```
And the live state was modified by another application to
```yaml
containers:
- name: server
  image: nginx:2.0.0
- name: my-injected-sidecar
  image: my-cool-mesh:1.0.0
```
Now, you want to upgrade the `nginx` image tag to `2.1.0`. So, you upgrade to a
chart with the given manifest:
```yaml
containers:
- name: server
  image: nginx:2.1.0
```
What happens?
In Helm 2, Helm generates a patch of the `containers` object between the old
manifest and the new manifest. The cluster's live state is not considered during
the patch generation.
The cluster's live state is modified to look like the following:
```yaml
containers:
- name: server
  image: nginx:2.1.0
```
The sidecar container is removed from the live state. More panic ensues.
In Helm 3, Helm generates a patch of the `containers` object between the old
manifest, the live state, and the new manifest. It notices that the new manifest
changes the image tag to `2.1.0`, but live state contains a sidecar container.
The cluster's live state is modified to look like the following:
```yaml
containers:
- name: server
  image: nginx:2.1.0
- name: my-injected-sidecar
  image: my-cool-mesh:1.0.0
```
### Release Names are now scoped to the Namespace
With the removal of Tiller, the information about each release had to go
somewhere. In Helm 2, this was stored in the same namespace as Tiller. In
practice, this meant that once a name was used by a release, no other release
could use that same name, even if it was deployed in a different namespace.
In Helm 3, information about a particular release is now stored in the same
namespace as the release itself. This means that users can now `helm install
wordpress stable/wordpress` in two separate namespaces, and each can be referred
to with `helm list` by changing the current namespace context (e.g. `helm list
--namespace foo`).
With this greater alignment to native cluster namespaces, the `helm list`
command no longer lists all releases by default. Instead, it will list only the
releases in the namespace of your current kubernetes context (i.e. the namespace
shown when you run `kubectl config view --minify`). It also means you must
supply the `--all-namespaces` flag to `helm list` to get behaviour similar to
Helm 2.
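As a quick illustration, using the same example chart reference as above (the namespace names are placeholders):
```console
# The same release name can now exist in two different namespaces
$ helm install wordpress stable/wordpress --namespace team-a
$ helm install wordpress stable/wordpress --namespace team-b
# Each release is only listed within its own namespace
$ helm list --namespace team-a
$ helm list --all-namespaces
```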
### Secrets as the default storage driver
In Helm 3, Secrets are now used as the [default storage
driver](/docs/topics/advanced/#storage-backends). Helm 2 used ConfigMaps by
default to store release information. In Helm 2.7.0, a new storage backend that
uses Secrets for storing release information was implemented, and it is now the
default starting in Helm 3.
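You can inspect these release records directly, since Helm 3 stores each release revision as a Secret (labeled `owner=helm`) in the release's namespace. A quick check, with a placeholder namespace:
```console
# List the Helm 3 release records stored as Secrets
$ kubectl get secrets --namespace my-namespace --selector owner=helm
```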
Changing to Secrets as the Helm 3 default allows for additional security in
protecting charts in conjunction with the release of Secret encryption in
Kubernetes.
[Encrypting secrets at
rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) became
available as an alpha feature in Kubernetes 1.7 and became stable as of
Kubernetes 1.13. This allows users to encrypt Helm release metadata at rest, and
so it is a good starting point that can be expanded later into using something
like Vault.
### Go import path changes
In Helm 3, Helm switched the Go import path over from `k8s.io/helm` to
`helm.sh/helm/v3`. If you intend to upgrade to the Helm 3 Go client libraries,
make sure to change your import paths.
### Capabilities
The `.Capabilities` built-in object available during the rendering stage has
been simplified.
See [Built-in Objects](/docs/chart_template_guide/builtin_objects/) for more information.
### Validating Chart Values with JSONSchema
A JSON Schema can now be imposed upon chart values. This ensures that values
provided by the user follow the schema laid out by the chart maintainer,
providing better error reporting when the user provides an incorrect set of
values for a chart.
Validation occurs when any of the following commands are invoked:
* `helm install`
* `helm upgrade`
* `helm template`
* `helm lint`
See the documentation on [Schema files](/docs/topics/charts#schema-files) for
more information.
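As a minimal sketch, a `values.schema.json` file placed at the root of a chart might look like the following (the field names are purely illustrative):
```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      }
    }
  }
}
```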
### Consolidation of `requirements.yaml` into `Chart.yaml`
The Chart dependency management system moved from requirements.yaml and
requirements.lock to Chart.yaml and Chart.lock. We recommend that new charts
meant for Helm 3 use the new format. However, Helm 3 still understands Chart API
version 1 (`v1`) and will load existing `requirements.yaml` files.
In Helm 2, this is how a `requirements.yaml` looked:
```yaml
dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://charts.helm.sh/stable
    condition: mariadb.enabled
    tags:
      - database
```
In Helm 3, the dependency is expressed the same way, but now from your
`Chart.yaml`:
```yaml
dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://charts.helm.sh/stable
    condition: mariadb.enabled
    tags:
      - database
```
Charts are still downloaded and placed in the `charts/` directory, so subcharts
vendored into the `charts/` directory will continue to work without
modification.
### Name (or `--generate-name`) is now required on install
In Helm 2, if no name was provided, an auto-generated name would be given. In
production, this proved to be more of a nuisance than a helpful feature. In Helm
3, Helm will throw an error if no name is provided with `helm install`.
If you still wish to have a name auto-generated, you can use the
`--generate-name` flag to create one for you.
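For example (the chart path and release name are placeholders):
```console
# Fails in Helm 3: no release name was given
$ helm install ./mychart
# Works: an explicit name is provided
$ helm install my-release ./mychart
# Works: Helm generates a name for you
$ helm install ./mychart --generate-name
```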
### Pushing Charts to OCI Registries
This is an experimental feature introduced in Helm 3. To use it, set the
environment variable `HELM_EXPERIMENTAL_OCI=1`.
At a high level, a Chart Repository is a location where Charts can be stored and
shared. The Helm client packs and ships Helm Charts to a Chart Repository.
Simply put, a Chart Repository is a basic HTTP server that houses an index.yaml
file and some packaged charts.
While there are several benefits to the Chart Repository API meeting the most
basic storage requirements, a few drawbacks have started to show:
- Chart Repositories have a very hard time abstracting most of the security
implementations required in a production environment. Having a standard API
for authentication and authorization is very important in production
scenarios.
- Helm’s Chart provenance tools used for signing and verifying the integrity and
origin of a chart are an optional piece of the Chart publishing process.
- In multi-tenant scenarios, the same Chart can be uploaded by another tenant,
doubling the storage cost to store the same content. Smarter chart
repositories have been designed to handle this, but it’s not a part of the
formal specification.
- Using a single index file for search, metadata information, and fetching
Charts has made it difficult or clunky to design around in secure multi-tenant
implementations.
Docker’s Distribution project (also known as Docker Registry v2) is the
successor to the Docker Registry project. Many major cloud vendors have a
product offering of the Distribution project, and with so many vendors offering
the same product, the Distribution project has benefited from many years of
hardening, security best practices, and battle-testing.
Please have a look at `helm help chart` and `helm help registry` for more
information on how to package a chart and push it to a Docker registry.
For more info, please see [this page](/docs/topics/registries/).
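A minimal sketch of the experimental workflow, assuming a hypothetical registry at `localhost:5000` (these `helm chart` subcommands apply to the Helm 3 releases where OCI support was still experimental):
```console
$ export HELM_EXPERIMENTAL_OCI=1
# Authenticate against the registry
$ helm registry login -u myuser localhost:5000
# Save the chart to the local registry cache, then push it
$ helm chart save ./mychart localhost:5000/myrepo/mychart:0.1.0
$ helm chart push localhost:5000/myrepo/mychart:0.1.0
```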
### Removal of `helm serve`
`helm serve` ran a local Chart Repository on your machine for development
purposes. However, it didn't receive much uptake as a development tool and had
numerous issues with its design. In the end, we decided to remove it and split
it out as a plugin.
For a similar experience to `helm serve`, have a look at the local filesystem
storage option in
[ChartMuseum](https://chartmuseum.com/docs/#using-with-local-filesystem-storage)
and the [servecm plugin](https://github.com/jdolitsky/helm-servecm).
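A minimal sketch of serving charts from a local directory with ChartMuseum (the port and storage path are placeholders):
```console
# Serve packaged charts out of a local directory
$ chartmuseum --port=8080 --storage="local" --storage-local-rootdir="./chartstorage"
# Add it to Helm like any other chart repository
$ helm repo add local http://localhost:8080
```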
### Library chart support
Helm 3 supports a class of chart called a “library chart”. This is a chart that
is shared by other charts, but does not create any release artifacts of its own.
A library chart’s templates can only declare `define` elements. Globally scoped
non-`define` content is simply ignored. This allows users to share snippets of
code that can be re-used across many charts, avoiding redundancy and keeping
charts [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
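For example, a library chart might ship nothing but named templates like the one below (the `mylib.labels` name is hypothetical); a consuming chart could then pull it in with something like `{{ include "mylib.labels" . | nindent 4 }}`:
```yaml
# templates/_labels.tpl in the hypothetical "mylib" library chart
{{- define "mylib.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
```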
Library charts are declared in the dependencies directive in Chart.yaml, and are
installed and managed like any other chart.
```yaml
dependencies:
  - name: mylib
    version: 1.x.x
    repository: quay.io
```
We’re very excited to see the use cases this feature opens up for chart
developers, as well as any best practices that arise from consuming library
charts.
### Chart.yaml apiVersion bump
With the introduction of library chart support and the consolidation of
requirements.yaml into Chart.yaml, clients that understood Helm 2's package
format won't understand these new features. So, we bumped the apiVersion in
Chart.yaml from `v1` to `v2`.
`helm create` now creates charts using this new format, so the default
apiVersion was bumped there as well.
Clients wishing to support both versions of Helm charts should inspect the
`apiVersion` field in Chart.yaml to understand how to parse the package format.
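Putting the pieces together, a minimal Helm 3 `Chart.yaml` using the v2 API might look like this (the chart name, versions, and dependency are illustrative):
```yaml
apiVersion: v2
name: mychart
version: 0.1.0
dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://charts.helm.sh/stable
```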
### XDG Base Directory Support
[The XDG Base Directory
Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html)
is a portable standard defining where configuration, data, and cached files
should be stored on the filesystem.
In Helm 2, Helm stored all this information in `~/.helm` (affectionately known
as `helm home`), which could be changed by setting the `$HELM_HOME` environment
variable, or by using the global flag `--home`.
In Helm 3, Helm now respects the following environment variables as per the XDG
Base Directory Specification:
- `$XDG_CACHE_HOME`
- `$XDG_CONFIG_HOME`
- `$XDG_DATA_HOME`
Helm plugins are still passed `$HELM_HOME` as an alias to `$XDG_DATA_HOME` for
backwards compatibility with plugins looking to use `$HELM_HOME` as a scratchpad
environment.
Several new environment variables are also passed in to the plugin's environment
to accommodate this change:
- `$HELM_PATH_CACHE` for the cache path
- `$HELM_PATH_CONFIG` for the config path
- `$HELM_PATH_DATA` for the data path
Helm plugins looking to support Helm 3 should consider using these new
environment variables instead.
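To see which paths Helm resolves on your machine, you can inspect its environment; the XDG override below is just an illustration:
```console
# Print the cache, config, and data locations Helm is using
$ helm env
# Run with an alternate XDG config location (Linux)
$ XDG_CONFIG_HOME=$HOME/.config helm env
```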
### CLI Command Renames
To better align with the verbiage of other package managers, `helm delete`
was renamed to `helm uninstall`. `helm delete` is still retained as an alias to
`helm uninstall`, so either form can be used.
In Helm 2, in order to purge the release ledger, the `--purge` flag had to be
provided. This functionality is now enabled by default. To retain the previous
behavior, use `helm uninstall --keep-history`.
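For example (the release name is a placeholder):
```console
# Uninstall but keep the release ledger, as Helm 2 did by default
$ helm uninstall my-release --keep-history
# The kept history remains visible
$ helm list --uninstalled
$ helm history my-release
```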
Additionally, several other commands were renamed to accommodate the same
conventions:
- `helm inspect` -> `helm show`
- `helm fetch` -> `helm pull`
These commands have also retained their older verbs as aliases, so you can
continue to use them in either form.
### Automatically creating namespaces
When creating a release in a namespace that does not exist, Helm 2 created the
namespace. Helm 3 follows the behavior of other Kubernetes tooling and returns
an error if the namespace does not exist. Helm 3 will create the namespace if
you explicitly specify the `--create-namespace` flag.
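For example (the namespace and chart are placeholders):
```console
# Create the target namespace if needed, then install into it
$ helm install myapp ./myapp --namespace staging --create-namespace
```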
### What happened to .Chart.ApiVersion?
Helm follows the typical convention for CamelCasing, which is to capitalize an
acronym. We have done this elsewhere in the code, such as with
`.Capabilities.APIVersions.Has`. In Helm v3, we corrected `.Chart.ApiVersion`
to follow this pattern, renaming it to `.Chart.APIVersion`.
---
title: "Flow Control"
description: "A quick overview on the flow structure within templates."
weight: 7
---
Control structures (called "actions" in template parlance) provide you, the
template author, with the ability to control the flow of a template's
generation. Helm's template language provides the following control structures:
- `if`/`else` for creating conditional blocks
- `with` to specify a scope
- `range`, which provides a "for each"-style loop
In addition to these, it provides a few actions for declaring and using named
template segments:
- `define` declares a new named template inside of your template
- `template` imports a named template
- `block` declares a special kind of fillable template area
In this section, we'll talk about `if`, `with`, and `range`. The others are
covered in the "Named Templates" section later in this guide.
## If/Else
The first control structure we'll look at is for conditionally including blocks
of text in a template. This is the `if`/`else` block.
The basic structure for a conditional looks like this:
```
{{ if PIPELINE }}
  # Do something
{{ else if OTHER PIPELINE }}
  # Do something else
{{ else }}
  # Default case
{{ end }}
```
Notice that we're now talking about _pipelines_ instead of values. The reason
for this is to make it clear that control structures can execute an entire
pipeline, not just evaluate a value.
A pipeline is evaluated as _false_ if the value is:
- a boolean false
- a numeric zero
- an empty string
- a `nil` (empty or null)
- an empty collection (`map`, `slice`, `tuple`, `dict`, `array`)
Under all other conditions, the condition is true.
Let's add a simple conditional to our ConfigMap. We'll add another setting if
the drink is set to coffee:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{ if eq .Values.favorite.drink "coffee" }}mug: "true"{{ end }}
```
Since we commented out `drink: coffee` in our last example, the output should
not include a `mug: "true"` flag. But if we add that line back into our
`values.yaml` file, the output should look like this:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: eyewitness-elk-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
  mug: "true"
```
## Controlling Whitespace
While we're looking at conditionals, we should take a quick look at the way
whitespace is controlled in templates. Let's take the previous example and
format it to be a little easier to read:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{ if eq .Values.favorite.drink "coffee" }}
    mug: "true"
  {{ end }}
```
Initially, this looks good. But if we run it through the template engine, we'll
get an unfortunate result:
```console
$ helm install --dry-run --debug ./mychart
SERVER: "localhost:44134"
CHART PATH: /Users/mattbutcher/Code/Go/src/helm.sh/helm/_scratch/mychart
Error: YAML parse error on mychart/templates/configmap.yaml: error converting YAML to JSON: yaml: line 9: did not find expected key
```
What happened? We generated incorrect YAML because of the whitespacing above.
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: eyewitness-elk-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
    mug: "true"
```
`mug` is incorrectly indented. Let's simply out-dent that one line, and re-run:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{ if eq .Values.favorite.drink "coffee" }}
  mug: "true"
  {{ end }}
```
When we send that, we'll get YAML that is valid, but still looks a little funny:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: telling-chimp-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"

  mug: "true"

```
Notice that we received a few empty lines in our YAML. Why? When the template
engine runs, it _removes_ the contents inside of `{{` and `}}`, but it leaves
the remaining whitespace exactly as is.
YAML ascribes meaning to whitespace, so managing the whitespace becomes pretty
important. Fortunately, Helm templates have a few tools to help.
First, the curly brace syntax of template declarations can be modified with
special characters to tell the template engine to chomp whitespace. `{{-` (with
the dash and space added) indicates that whitespace should be chomped left, while
`-}}` means whitespace to the right should be consumed. _Be careful! Newlines are
whitespace!_
> Make sure there is a space between the `-` and the rest of your directive.
> `{{- 3 }}` means "trim left whitespace and print 3" while `{{-3 }}` means
> "print -3".
Using this syntax, we can modify our template to get rid of those new lines:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "coffee" }}
  mug: "true"
  {{- end }}
```
Just for the sake of making this point clear, let's adjust the above, and
substitute an `*` for each whitespace that will be deleted following this rule.
An `*` at the end of the line indicates a newline character that would be
removed:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}*
**{{- if eq .Values.favorite.drink "coffee" }}
  mug: "true"*
**{{- end }}
```
Keeping that in mind, we can run our template through Helm and see the result:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: clunky-cat-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
  mug: "true"
```
Be careful with the chomping modifiers. It is easy to accidentally do things
like this:
```yaml
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "coffee" -}}
  mug: "true"
  {{- end -}}
```
That will produce `food: "PIZZA"mug: "true"` because it consumed newlines on both
sides.
> For the details on whitespace control in templates, see the [Official Go
> template documentation](https://godoc.org/text/template)
Finally, sometimes it's easier to tell the template system how to indent for you
instead of trying to master the spacing of template directives. For that reason,
you may sometimes find it useful to use the `indent` function (`{{ indent 2 "mug:true" }}`).
## Modifying scope using `with`
The next control structure to look at is the `with` action. This controls
variable scoping. Recall that `.` is a reference to _the current scope_. So
`.Values` tells the template to find the `Values` object in the current scope.
The syntax for `with` is similar to a simple `if` statement:
```
{{ with PIPELINE }}
  # restricted scope
{{ end }}
```
Scopes can be changed. `with` can allow you to set the current scope (`.`) to a
particular object. For example, we've been working with `.Values.favorite`.
Let's rewrite our ConfigMap to alter the `.` scope to point to
`.Values.favorite`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
```
Note that we removed the `if` conditional from the previous exercise
because it is now unnecessary - the block after `with` only executes
if the value of `PIPELINE` is not empty.
Notice that now we can reference `.drink` and `.food` without qualifying them.
That is because the `with` statement sets `.` to point to `.Values.favorite`.
The `.` is reset to its previous scope after `{{ end }}`.
But here's a note of caution! Inside of the restricted scope, you will not be
able to access the other objects from the parent scope using `.`. This, for
example, will fail:
```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ .Release.Name }}
  {{- end }}
```
It will produce an error because `Release.Name` is not inside of the restricted
scope for `.`. However, if we swap the last two lines, all will work as expected
because the scope is reset after `{{ end }}`.
```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
  release: {{ .Release.Name }}
```
Or, we can use `$` for accessing the object `Release.Name` from the parent
scope. `$` is mapped to the root scope when template execution begins and it
does not change during template execution. The following would work as well:
```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ $.Release.Name }}
  {{- end }}
```
After looking at `range`, we will take a look at template variables, which offer
one solution to the scoping issue above.
## Looping with the `range` action
Many programming languages have support for looping using `for` loops, `foreach`
loops, or similar functional mechanisms. In Helm's template language, the way to
iterate through a collection is to use the `range` operator.
To start, let's add a list of pizza toppings to our `values.yaml` file:
```yaml
favorite:
  drink: coffee
  food: pizza
pizzaToppings:
  - mushrooms
  - cheese
  - peppers
  - onions
```
Now we have a list (called a `slice` in templates) of `pizzaToppings`. We can
modify our template to print this list into our ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
  toppings: |-
    {{- range .Values.pizzaToppings }}
    - {{ . | title | quote }}
    {{- end }}
```
We can use `$` for accessing the list `Values.pizzaToppings` from the parent
scope. `$` is mapped to the root scope when template execution begins and it
does not change during template execution. The following would work as well:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  toppings: |-
    {{- range $.Values.pizzaToppings }}
    - {{ . | title | quote }}
    {{- end }}
  {{- end }}
```
Let's take a closer look at the `toppings:` list. The `range` function will
"range over" (iterate through) the `pizzaToppings` list. But now something
interesting happens. Just like `with` sets the scope of `.`, so does a `range`
operator. Each time through the loop, `.` is set to the current pizza topping.
That is, the first time, `.` is set to `mushrooms`. The second iteration it is
set to `cheese`, and so on.
We can send the value of `.` directly down a pipeline, so when we do
`{{ . | title | quote }}`, it sends `.` to `title` (the title case function) and then to
`quote`. If we run this template, the output will be:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: edgy-dragonfly-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
  toppings: |-
    - "Mushrooms"
    - "Cheese"
    - "Peppers"
    - "Onions"
```
Now, in this example we've done something tricky. The `toppings: |-` line is
declaring a multi-line string. So our list of toppings is actually not a YAML
list. It's a big string. Why would we do this? Because the data in ConfigMaps
`data` is composed of key/value pairs, where both the key and the value are
simple strings. To understand why this is the case, take a look at the
[Kubernetes ConfigMap docs](https://kubernetes.io/docs/concepts/configuration/configmap/).
For us, though, this detail doesn't matter much.
> The `|-` marker in YAML takes a multi-line string. This can be a useful
> technique for embedding big blocks of data inside of your manifests, as
> exemplified here.
Sometimes it's useful to be able to quickly make a list inside of your template,
and then iterate over that list. Helm templates have a function to make this
easy: `tuple`. In computer science, a tuple is a list-like collection of fixed
size, but with arbitrary data types. This roughly conveys the way a `tuple` is
used.
```yaml
  sizes: |-
    {{- range tuple "small" "medium" "large" }}
    - {{ . }}
    {{- end }}
```
The above will produce this:
```yaml
sizes: |-
- small
- medium
- large
```
In addition to lists and tuples, `range` can be used to iterate over collections
that have a key and a value (like a `map` or `dict`). We'll see how to do that
in the next section when we introduce template variables.
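As a minimal sketch of that map-style iteration (variables are covered in the next section), ranging over the `favorite` map binds a key and a value on each pass:
```yaml
  favorites: |-
    {{- range $key, $val := .Values.favorite }}
    {{ $key }}: {{ $val | quote }}
    {{- end }}
```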
---
title: "Template Functions and Pipelines"
description: "Using functions in templates."
weight: 5
---
So far, we've seen how to place information into a template. But that
information is placed into the template unmodified. Sometimes we want to
transform the supplied data in a way that makes it more useable to us.
Let's start with a best practice: When injecting strings from the `.Values`
object into the template, we ought to quote these strings. We can do that by
calling the `quote` function in the template directive:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ quote .Values.favorite.drink }}
  food: {{ quote .Values.favorite.food }}
```
Template functions follow the syntax `functionName arg1 arg2...`. In the snippet
above, `quote .Values.favorite.drink` calls the `quote` function and passes it a
single argument.
Helm has over 60 available functions. Some of them are defined by the [Go
template language](https://godoc.org/text/template) itself. Most of the others
are part of the [Sprig template library](https://masterminds.github.io/sprig/).
We'll see many of them as we progress through the examples.
> While we talk about the "Helm template language" as if it is Helm-specific, it
> is actually a combination of the Go template language, some extra functions,
> and a variety of wrappers to expose certain objects to the templates. Many
> resources on Go templates may be helpful as you learn about templating.
## Pipelines
One of the powerful features of the template language is its concept of
_pipelines_. Drawing on a concept from UNIX, pipelines are a tool for chaining
together a series of template commands to compactly express a series of
transformations. In other words, pipelines are an efficient way of getting
several things done in sequence. Let's rewrite the above example using a
pipeline.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | quote }}
  food: {{ .Values.favorite.food | quote }}
```
In this example, instead of calling `quote ARGUMENT`, we inverted the order. We
"sent" the argument to the function using a pipeline (`|`):
`.Values.favorite.drink | quote`. Using pipelines, we can chain several
functions together:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
```
> Inverting the order is a common practice in templates. You will see `.val |
> quote` more often than `quote .val`. Either practice is fine.
When evaluated, that template will produce this:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: trendsetting-p-configmap
data:
myvalue: "Hello World"
drink: "coffee"
food: "PIZZA"
```
Note that our original `pizza` has now been transformed to `"PIZZA"`.
When pipelining arguments like this, the result of the first evaluation
(`.Values.favorite.drink`) is sent as the _last argument to the function_. We
can modify the drink example above to illustrate with a function that takes two
arguments: `repeat COUNT STRING`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | repeat 5 | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
```
The `repeat` function will echo the given string the given number of times, so
we will get this for output:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: melting-porcup-configmap
data:
myvalue: "Hello World"
drink: "coffeecoffeecoffeecoffeecoffee"
food: "PIZZA"
```
## Using the `default` function
One function frequently used in templates is the `default` function: `default
DEFAULT_VALUE GIVEN_VALUE`. This function allows you to specify a default value
inside of the template, in case the value is omitted. Let's use it to modify the
drink example above:
```yaml
drink: {{ .Values.favorite.drink | default "tea" | quote }}
```
If we run this as normal, we'll get our `coffee`:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: virtuous-mink-configmap
data:
myvalue: "Hello World"
drink: "coffee"
food: "PIZZA"
```
Now, we will remove the favorite drink setting from `values.yaml`:
```yaml
favorite:
#drink: coffee
food: pizza
```
Now re-running `helm install --dry-run --debug fair-worm ./mychart` will produce
this YAML:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: fair-worm-configmap
data:
myvalue: "Hello World"
drink: "tea"
food: "PIZZA"
```
In an actual chart, all static default values should live in the `values.yaml`,
and should not be repeated using the `default` command (otherwise they would be
redundant). However, the `default` command is perfect for computed values, which
cannot be declared inside `values.yaml`. For example:
```yaml
drink: {{ .Values.favorite.drink | default (printf "%s-tea" (include "fullname" .)) }}
```
In some places, an `if` conditional guard may be better suited than `default`.
We'll see those in the next section.
Template functions and pipelines are a powerful way to transform information and
then insert it into your YAML. But sometimes it's necessary to add some template
logic that is a little more sophisticated than just inserting a string. In the
next section we will look at the control structures provided by the template
language.
## Using the `lookup` function
The `lookup` function can be used to _look up_ resources in a running cluster.
The synopsis of the lookup function is `lookup apiVersion, kind, namespace, name
-> resource or resource list`.
| parameter | type |
|------------|--------|
| apiVersion | string |
| kind | string |
| namespace | string |
| name | string |
Both `name` and `namespace` are optional and can be passed as an empty string
(`""`).
The following combinations of parameters are possible:
| Behavior | Lookup function |
|----------------------------------------|--------------------------------------------|
| `kubectl get pod mypod -n mynamespace` | `lookup "v1" "Pod" "mynamespace" "mypod"` |
| `kubectl get pods -n mynamespace` | `lookup "v1" "Pod" "mynamespace" ""` |
| `kubectl get pods --all-namespaces` | `lookup "v1" "Pod" "" ""` |
| `kubectl get namespace mynamespace` | `lookup "v1" "Namespace" "" "mynamespace"` |
| `kubectl get namespaces` | `lookup "v1" "Namespace" "" ""` |
When `lookup` returns an object, it will return a dictionary. This dictionary
can be further navigated to extract specific values.
The following example will return the annotations present for the `mynamespace`
object:
```go
(lookup "v1" "Namespace" "" "mynamespace").metadata.annotations
```
When `lookup` returns a list of objects, it is possible to access the object
list via the `items` field:
```go
{{ range $index, $service := (lookup "v1" "Service" "mynamespace" "").items }}
    {{/* do something with each service */}}
{{ end }}
```
When no object is found, an empty value is returned. This can be used to check
for the existence of an object.
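For instance, a hedged sketch of such an existence check (the namespace name is illustrative only): because a missing object yields an empty value, wrapping the call in `not` makes the block run only when nothing was found.
```yaml
{{- if not (lookup "v1" "Namespace" "" "mynamespace") }}
# Namespace "mynamespace" was not found (or is not visible to Helm)
{{- end }}
```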
The `lookup` function uses Helm's existing Kubernetes connection configuration
to query Kubernetes. If any error is returned when calling the
API server (for example due to lack of permission to access a resource), Helm's
template processing will fail.
Keep in mind that Helm is not supposed to contact the Kubernetes API Server during
a `helm template|install|upgrade|delete|rollback --dry-run` operation. To test `lookup`
against a running cluster, `helm template|install|upgrade|delete|rollback --dry-run=server`
should be used instead to allow cluster connection.
## Operators are functions
For templates, the operators (`eq`, `ne`, `lt`, `gt`, `and`, `or` and so on) are
all implemented as functions. In pipelines, operations can be grouped with
parentheses (`(`, and `)`).
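For example, a minimal sketch combining `and`, `eq`, and parentheses (the `.Values.enabled` flag is an assumption for illustration, not part of this chart's values):
```yaml
{{- if and .Values.enabled (eq .Values.favorite.drink "coffee") }}
mug: "true"
{{- end }}
```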
Now we can turn from functions and pipelines to flow control with conditions,
loops, and scope modifiers.
---
title: "Accessing Files Inside Templates"
description: "How to access files from within a template."
weight: 10
---
In the previous section we looked at several ways to create and access named
templates. This makes it easy to import one template from within another
template. But sometimes it is desirable to import a _file that is not a
template_ and inject its contents without sending the contents through the
template renderer.
Helm provides access to files through the `.Files` object. Before we get going
with the template examples, though, there are a few things to note about how
this works:
- It is okay to add extra files to your Helm chart. These files will be bundled.
Be careful, though. Charts must be smaller than 1M because of the storage
limitations of Kubernetes objects.
- Some files cannot be accessed through the `.Files` object, usually for
security reasons.
- Files in `templates/` cannot be accessed.
- Files excluded using `.helmignore` cannot be accessed.
- Files outside of a Helm application [subchart](), including those of the parent, cannot be accessed
- Charts do not preserve UNIX mode information, so file-level permissions will
have no impact on the availability of a file when it comes to the `.Files`
object.
<!-- (see https://github.com/jonschlinkert/markdown-toc) -->
<!-- toc -->
- [Basic example](#basic-example)
- [Path helpers](#path-helpers)
- [Glob patterns](#glob-patterns)
- [ConfigMap and Secrets utility functions](#configmap-and-secrets-utility-functions)
- [Encoding](#encoding)
- [Lines](#lines)
<!-- tocstop -->
## Basic example
With those caveats behind us, let's write a template that reads three files into
our ConfigMap. To get started, we will add three files to the chart, putting all
three directly inside of the `mychart/` directory.
`config1.toml`:
```toml
message = Hello from config 1
```
`config2.toml`:
```toml
message = This is config 2
```
`config3.toml`:
```toml
message = Goodbye from config 3
```
Each of these is a simple TOML file (think old-school Windows INI files). We
know the names of these files, so we can use a `range` function to loop through
them and inject their contents into our ConfigMap.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  {{- $files := .Files }}
  {{- range tuple "config1.toml" "config2.toml" "config3.toml" }}
  {{ . }}: |-
    {{ $files.Get . }}
  {{- end }}
```
This ConfigMap uses several of the techniques discussed in previous sections.
For example, we create a `$files` variable to hold a reference to the `.Files`
object. We also use the `tuple` function to create a list of files that we loop
through. Then we print each file name (`{{ . }}: |-`) followed by the contents
of the file (`{{ $files.Get . }}`).
Running this template will produce a single ConfigMap with the contents of all
three files:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: quieting-giraf-configmap
data:
config1.toml: |-
message = Hello from config 1
config2.toml: |-
message = This is config 2
config3.toml: |-
message = Goodbye from config 3
```
## Path helpers
When working with files, it can be very useful to perform some standard
operations on the file paths themselves. To help with this, Helm imports many of
the functions from Go's [path](https://golang.org/pkg/path/) package for your
use. They are all accessible with the same names as in the Go package, but with
a lowercase first letter. For example, `Base` becomes `base`, etc.
The imported functions are:
- Base
- Dir
- Ext
- IsAbs
- Clean
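As a short illustration (a sketch, not from the original page; the file names are assumptions), these helpers compose naturally with the file loops shown above:
```yaml
{{- range tuple "config/app.toml" "config/db.toml" }}
{{ base . }}: |-
  {{ $.Files.Get . }}
{{- end }}
```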
## Glob patterns
As your chart grows, you may find you have a greater need to organize your files
more, and so we provide a `Files.Glob(pattern string)` method to assist in
extracting certain files with all the flexibility of [glob
patterns](https://godoc.org/github.com/gobwas/glob).
`.Glob` returns a `Files` type, so you may call any of the `Files` methods on
the returned object.
For example, imagine the directory structure:
```
foo/:
foo.txt foo.yaml
bar/:
bar.go bar.conf baz.yaml
```
You have multiple options with Globs:
```yaml
{{ $currentScope := . }}
{{ range $path, $_ := .Files.Glob "**.yaml" }}
    {{- with $currentScope }}
        {{ .Files.Get $path }}
    {{- end }}
{{ end }}
```
Or
```yaml
{{ range $path, $_ := .Files.Glob "**.yaml" }}
    {{ $.Files.Get $path }}
{{ end }}
```
## ConfigMap and Secrets utility functions
(Available Helm 2.0.2 and after)
It is very common to want to place file content into both ConfigMaps and
Secrets, for mounting into your pods at run time. To help with this, we provide
a couple utility methods on the `Files` type.
For further organization, it is especially useful to use these methods in
conjunction with the `Glob` method.
Given the directory structure from the [Glob](#glob-patterns) example above:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
---
apiVersion: v1
kind: Secret
metadata:
  name: very-secret
type: Opaque
data:
{{ (.Files.Glob "bar/*").AsSecrets | indent 2 }}
```
## Encoding
You can import a file and have the template base-64 encode it to ensure
successful transmission:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-secret
type: Opaque
data:
  token: |-
    {{ .Files.Get "config1.toml" | b64enc }}
```
The above will take the same `config1.toml` file we used before and encode it:
```yaml
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: lucky-turkey-secret
type: Opaque
data:
token: |-
bWVzc2FnZSA9IEhlbGxvIGZyb20gY29uZmlnIDEK
```
## Lines
Sometimes it is desirable to access each line of a file in your template. We
provide a convenient `Lines` method for this.
You can loop through `Lines` using a `range` function:
```yaml
data:
  some-file.txt: {{ range .Files.Lines "foo/bar.txt" }}
    {{ . }}{{ end }}
```
There is no way to pass files external to the chart during `helm install`. So if
you are asking users to supply data, it must be loaded using `helm install -f`
or `helm install --set`.
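One practical option here (a Helm CLI feature, not something defined by this chart) is `--set-file`, which reads a local file's contents into a value at install time; the value name below is an assumption:
```console
$ helm install my-release ./mychart --set-file extraConfig=./config1.toml
```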
This discussion wraps up our dive into the tools and techniques for writing Helm
templates. In the next section we will see how you can use one special file,
`templates/NOTES.txt`, to send post-installation instructions to the users of
your chart.
---
title: "Subcharts and Global Values"
description: "Interacting with a subchart's and global values."
weight: 11
---
To this point we have been working only with one chart. But charts can have
dependencies, called _subcharts_, that also have their own values and templates.
In this section we will create a subchart and see the different ways we can
access values from within templates.
Before we dive into the code, there are a few important details to learn about application subcharts.
1. A subchart is considered "stand-alone", which means a subchart can never
explicitly depend on its parent chart.
2. For that reason, a subchart cannot access the values of its parent.
3. A parent chart can override values for subcharts.
4. Helm has a concept of _global values_ that can be accessed by all charts.
> These limitations do not all necessarily apply to [library charts](), which are designed to provide standardized helper functionality.
As we walk through the examples in this section, many of these concepts will
become clearer.
## Creating a Subchart
For these exercises, we'll start with the `mychart/` chart we created at the
beginning of this guide, and we'll add a new chart inside of it.
```console
$ cd mychart/charts
$ helm create mysubchart
Creating mysubchart
$ rm -rf mysubchart/templates/*
```
Notice that just as before, we deleted all of the base templates so that we can
start from scratch. In this guide, we are focused on how templates work, not on
managing dependencies. But the [Charts Guide]()
has more information on how subcharts work.
## Adding Values and a Template to the Subchart
Next, let's create a simple template and values file for our `mysubchart` chart.
There should already be a `values.yaml` in `mychart/charts/mysubchart`. We'll
set it up like this:
```yaml
dessert: cake
```
Next, we'll create a new ConfigMap template in
`mychart/charts/mysubchart/templates/configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-cfgmap2
data:
  dessert: {{ .Values.dessert }}
```
Because every subchart is a _stand-alone chart_, we can test `mysubchart` on its
own:
```console
$ helm install --generate-name --dry-run --debug mychart/charts/mysubchart
SERVER: "localhost:44134"
CHART PATH: /Users/mattbutcher/Code/Go/src/helm.sh/helm/_scratch/mychart/charts/mysubchart
NAME: newbie-elk
TARGET NAMESPACE: default
CHART: mysubchart 0.1.0
MANIFEST:
---
# Source: mysubchart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: newbie-elk-cfgmap2
data:
dessert: cake
```
## Overriding Values from a Parent Chart
Our original chart, `mychart` is now the _parent_ chart of `mysubchart`. This
relationship is based entirely on the fact that `mysubchart` is within
`mychart/charts`.
Because `mychart` is a parent, we can specify configuration in `mychart` and
have that configuration pushed into `mysubchart`. For example, we can modify
`mychart/values.yaml` like this:
```yaml
favorite:
drink: coffee
food: pizza
pizzaToppings:
- mushrooms
- cheese
- peppers
- onions
mysubchart:
dessert: ice cream
```
Note the last two lines. Any directives inside of the `mysubchart` section will
be sent to the `mysubchart` chart. So if we run `helm install --generate-name --dry-run --debug
mychart`, one of the things we will see is the `mysubchart` ConfigMap:
```yaml
# Source: mychart/charts/mysubchart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: unhinged-bee-cfgmap2
data:
dessert: ice cream
```
The value at the top level has now overridden the value of the subchart.
There's an important detail to notice here. We didn't change the template of
`mychart/charts/mysubchart/templates/configmap.yaml` to point to
`.Values.mysubchart.dessert`. From that template's perspective, the value is
still located at `.Values.dessert`. As the template engine passes values along,
it sets the scope. So for the `mysubchart` templates, only values specifically
for `mysubchart` will be available in `.Values`.
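To make that concrete, here is a rough sketch (not from the original page) of the values each chart sees at render time, given the `values.yaml` above:
```yaml
# .Values as seen by mychart's templates (abbreviated)
favorite:
  drink: coffee
  food: pizza
mysubchart:
  dessert: ice cream
# .Values as seen by mysubchart's templates
dessert: ice cream
```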
Sometimes, though, you do want certain values to be available to all of the
templates. This is accomplished using global chart values.
## Global Chart Values
Global values are values that can be accessed from any chart or subchart by
exactly the same name. Globals require explicit declaration. You can't use an
existing non-global as if it were a global.
The Values data type has a reserved section called `Values.global` where global
values can be set. Let's set one in our `mychart/values.yaml` file.
```yaml
favorite:
drink: coffee
food: pizza
pizzaToppings:
- mushrooms
- cheese
- peppers
- onions
mysubchart:
dessert: ice cream
global:
salad: caesar
```
Because of the way globals work, both `mychart/templates/configmap.yaml` and
`mysubchart/templates/configmap.yaml` should be able to access that value as
`{{ .Values.global.salad }}`.
`mychart/templates/configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  salad: {{ .Values.global.salad }}
```
`mysubchart/templates/configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-cfgmap2
data:
  dessert: {{ .Values.dessert }}
  salad: {{ .Values.global.salad }}
```
Now if we run a dry run install, we'll see the same value in both outputs:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: silly-snake-configmap
data:
salad: caesar
---
# Source: mychart/charts/mysubchart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: silly-snake-cfgmap2
data:
dessert: ice cream
salad: caesar
```
Globals are useful for passing information like this, though it does take some
planning to make sure the right templates are configured to use globals.
## Sharing Templates with Subcharts
Parent charts and subcharts can share templates. Any defined block in any chart
is available to other charts.
For example, we can define a simple template like this:
```yaml
{{- define "labels" }}from: mychart{{ end }}
```
Recall that template names are _globally shared_. Thus, the `labels` template
defined above can be included from any other chart.
While chart developers have a choice between `include` and `template`, one
advantage of using `include` is that `include` can dynamically reference
templates:
```yaml
{{ include $mytemplate . }}
```
The above will dereference `$mytemplate`. The `template` function, in contrast,
will only accept a string literal.
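Another advantage of `include` (a usage sketch, assuming the `labels` template defined above) is that its output can be piped through other functions, such as `nindent`, to control indentation at the call site:
```yaml
metadata:
  labels: {{- include "labels" . | nindent 4 }}
```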
## Avoid Using Blocks
The Go template language provides a `block` keyword that allows developers to
provide a default implementation which is overridden later. In Helm charts,
blocks are not the best tool for overriding because if multiple implementations
of the same block are provided, the one selected is unpredictable.
The suggestion is to instead use `include`.
---
title: "Getting Started"
weight: 2
description: "A quick guide on Chart templates."
aliases: ["/intro/getting_started/"]
---
In this section of the guide, we'll create a chart and then add a first
template. The chart we created here will be used throughout the rest of the
guide.
To get going, let's take a brief look at a Helm chart.
## Charts
As described in the [Charts Guide](../../topics/charts), Helm charts are
structured like this:
```
mychart/
Chart.yaml
values.yaml
charts/
templates/
...
```
The `templates/` directory is for template files. When Helm evaluates a chart,
it will send all of the files in the `templates/` directory through the template
rendering engine. It then collects the results of those templates and sends them
on to Kubernetes.
The `values.yaml` file is also important to templates. This file contains the
_default values_ for a chart. These values may be overridden by users during
`helm install` or `helm upgrade`.
The `Chart.yaml` file contains a description of the chart. You can access it
from within a template.
The `charts/` directory _may_ contain other charts
(which we call _subcharts_). Later in this guide we will see how those work when
it comes to template rendering.
## A Starter Chart
For this guide, we'll create a simple chart called `mychart`, and then we'll
create some templates inside of the chart.
```console
$ helm create mychart
Creating mychart
```
### A Quick Glimpse of `mychart/templates/`
If you take a look at the `mychart/templates/` directory, you'll notice a few
files already there.
- `NOTES.txt`: The "help text" for your chart. This will be displayed to your
users when they run `helm install`.
- `deployment.yaml`: A basic manifest for creating a Kubernetes
[deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- `service.yaml`: A basic manifest for creating a [service
endpoint](https://kubernetes.io/docs/concepts/services-networking/service/) for your deployment
- `_helpers.tpl`: A place to put template helpers that you can re-use throughout
the chart
And what we're going to do is... _remove them all!_ That way we can work through
our tutorial from scratch. We'll actually create our own `NOTES.txt` and
`_helpers.tpl` as we go.
```console
$ rm -rf mychart/templates/*
```
When you're writing production-grade charts, having basic versions of these
templates can be really useful. So in your day-to-day chart authoring, you probably
won't want to remove them.
## A First Template
The first template we are going to create will be a `ConfigMap`. In Kubernetes,
a ConfigMap is simply an object for storing configuration data. Other things,
like pods, can access the data in a ConfigMap.
Because ConfigMaps are basic resources, they make a great starting point for us.
Let's begin by creating a file called `mychart/templates/configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mychart-configmap
data:
myvalue: "Hello World"
```
**TIP:** Template names do not follow a rigid naming pattern. However, we
recommend using the extension `.yaml` for YAML files and `.tpl` for helpers.
The YAML file above is a bare-bones ConfigMap, having the minimal necessary
fields. By virtue of the fact that this file is in the `mychart/templates/`
directory, it will be sent through the template engine.
It is just fine to put a plain YAML file like this in the `mychart/templates/`
directory. When Helm reads this template, it will simply send it to Kubernetes
as-is.
With this simple template, we now have an installable chart. And we can install
it like this:
```console
$ helm install full-coral ./mychart
NAME: full-coral
LAST DEPLOYED: Tue Nov 1 17:36:01 2016
NAMESPACE: default
STATUS: DEPLOYED
REVISION: 1
TEST SUITE: None
```
Using Helm, we can retrieve the release and see the actual template that was
loaded.
```console
$ helm get manifest full-coral
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mychart-configmap
data:
myvalue: "Hello World"
```
The `helm get manifest` command takes a release name (`full-coral`) and prints
out all of the Kubernetes resources that were uploaded to the server. Each file
begins with `---` to indicate the start of a YAML document, and then is followed
by an automatically generated comment line that tells us what template file
generated this YAML document.
From there on, we can see that the YAML data is exactly what we put in our
`configmap.yaml` file.
Now we can uninstall our release: `helm uninstall full-coral`.
### Adding a Simple Template Call
Hard-coding the `name:` into a resource is usually considered to be bad
practice. Names should be unique to a release. So we might want to generate a
name field by inserting the release name.
**TIP:** The `name:` field is limited to 63 characters because of limitations to
the DNS system. For that reason, release names are limited to 53 characters.
Kubernetes 1.3 and earlier limited to only 24 characters (thus 14 character
names).
Let's alter `configmap.yaml` accordingly.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
```
The big change comes in the value of the `name:` field, which is now
`{{ .Release.Name }}-configmap`.
> A template directive is enclosed in `{{` and `}}` blocks.
The template directive `{{ .Release.Name }}` injects the release name into the
template. The values that are passed into a template can be thought of as
_namespaced objects_, where a dot (`.`) separates each namespaced element.
The leading dot before `Release` indicates that we start with the top-most
namespace for this scope (we'll talk about scope in a bit). So we could read
`.Release.Name` as "start at the top namespace, find the `Release` object, then
look inside of it for an object called `Name`".
The `Release` object is one of the built-in objects for Helm, and we'll cover it
in more depth later. But for now, it is sufficient to say that this will display
the release name that the library assigns to our release.
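Besides `.Release.Name`, a few other `Release` fields are commonly used; a brief sketch (not an exhaustive list, and not part of this exercise):
```yaml
metadata:
  name: {{ .Release.Name }}-configmap
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
```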
Now when we install our resource, we'll immediately see the result of using this
template directive:
```console
$ helm install clunky-serval ./mychart
NAME: clunky-serval
LAST DEPLOYED: Tue Nov 1 17:45:37 2016
NAMESPACE: default
STATUS: DEPLOYED
REVISION: 1
TEST SUITE: None
```
You can run `helm get manifest clunky-serval` to see the entire generated YAML.
Note that the name of the ConfigMap inside Kubernetes is now `clunky-serval-configmap`
instead of the previous `mychart-configmap`.
At this point, we've seen templates at their most basic: YAML files that have
template directives embedded in `{{` and `}}`. In the next part, we'll take a
deeper look into templates. But before moving on, there's one quick trick that
can make building templates faster: When you want to test the template
rendering, but not actually install anything, you can use `helm install --debug
--dry-run goodly-guppy ./mychart`. This will render the templates. But instead
of installing the chart, it will return the rendered template to you so you can
see the output:
```console
$ helm install --debug --dry-run goodly-guppy ./mychart
install.go:149: [debug] Original chart version: ""
install.go:166: [debug] CHART PATH: /Users/ninja/mychart
NAME: goodly-guppy
LAST DEPLOYED: Thu Dec 26 17:24:13 2019
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
pullPolicy: IfNotPresent
repository: nginx
imagePullSecrets: []
ingress:
annotations: {}
enabled: false
hosts:
- host: chart-example.local
paths: []
tls: []
nameOverride: ""
nodeSelector: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
port: 80
type: ClusterIP
serviceAccount:
create: true
name: null
tolerations: []
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: goodly-guppy-configmap
data:
myvalue: "Hello World"
```
Using `--dry-run` will make it easier to test your code, but it won't ensure
that Kubernetes itself will accept the templates you generate. It's best not to
assume that your chart will install just because `--dry-run` works.
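If you only want to render templates locally without contacting a cluster at
all, the `helm template` command offers a similar view. Here is a sketch of its
output for the same chart (abbreviated):

```console
$ helm template goodly-guppy ./mychart
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: goodly-guppy-configmap
data:
  myvalue: "Hello World"
```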
In the [Chart Template Guide](_index.md), we take the basic chart we defined
here and explore the Helm template language in detail. And we'll get started
with built-in objects. | helm | title Getting Started weight 2 description A quick guide on Chart templates aliases intro getting started In this section of the guide we ll create a chart and then add a first template The chart we created here will be used throughout the rest of the guide To get going let s take a brief look at a Helm chart Charts As described in the Charts Guide topics charts Helm charts are structured like this mychart Chart yaml values yaml charts templates The templates directory is for template files When Helm evaluates a chart it will send all of the files in the templates directory through the template rendering engine It then collects the results of those templates and sends them on to Kubernetes The values yaml file is also important to templates This file contains the default values for a chart These values may be overridden by users during helm install or helm upgrade The Chart yaml file contains a description of the chart You can access it from within a template The charts directory may contain other charts which we call subcharts Later in this guide we will see how those work when it comes to template rendering A Starter Chart For this guide we ll create a simple chart called mychart and then we ll create some templates inside of the chart console helm create mychart Creating mychart A Quick Glimpse of mychart templates If you take a look at the mychart templates directory you ll notice a few files already there NOTES txt The help text for your chart This will be displayed to your users when they run helm install deployment yaml A basic manifest for creating a Kubernetes deployment https kubernetes io docs concepts workloads controllers deployment service yaml A basic manifest for creating a service endpoint https kubernetes io docs concepts services networking service for your deployment helpers tpl A place to put template helpers that you can re use throughout the chart And what we re going to do is remove them all That way we can work through our tutorial from scratch We ll actually create our own NOTES txt and helpers tpl as we go console rm rf mychart templates When you re writing production grade charts having basic versions of these charts can be really useful So in your day to day chart authoring you probably won t want to remove them A First Template The first template we are going to create will be a ConfigMap In Kubernetes a ConfigMap is simply an object for storing configuration data Other things like pods can access the data in a ConfigMap Because ConfigMaps are basic resources they make a great starting point for us Let s begin by creating a file called mychart templates configmap yaml yaml apiVersion v1 kind ConfigMap metadata name mychart configmap data myvalue Hello World TIP Template names do not follow a rigid naming pattern However we recommend using the extension yaml for YAML files and tpl for helpers The YAML file above is a bare bones ConfigMap having the minimal necessary fields By virtue of the fact that this file is in the mychart templates directory it will be sent through the template engine It is just fine to put a plain YAML file like this in the mychart templates directory When Helm reads this template it will simply send it to Kubernetes as is With this simple template we now have an installable chart And we can install it like this console helm install full coral mychart NAME full coral LAST DEPLOYED Tue Nov 1 17 36 01 2016 NAMESPACE default STATUS DEPLOYED REVISION 1 TEST SUITE None Using Helm we can retrieve the release and 
see the actual template that was loaded console helm get manifest full coral Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name mychart configmap data myvalue Hello World The helm get manifest command takes a release name full coral and prints out all of the Kubernetes resources that were uploaded to the server Each file begins with to indicate the start of a YAML document and then is followed by an automatically generated comment line that tells us what template file generated this YAML document From there on we can see that the YAML data is exactly what we put in our configmap yaml file Now we can uninstall our release helm uninstall full coral Adding a Simple Template Call Hard coding the name into a resource is usually considered to be bad practice Names should be unique to a release So we might want to generate a name field by inserting the release name TIP The name field is limited to 63 characters because of limitations to the DNS system For that reason release names are limited to 53 characters Kubernetes 1 3 and earlier limited to only 24 characters thus 14 character names Let s alter configmap yaml accordingly yaml apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World The big change comes in the value of the name field which is now configmap A template directive is enclosed in blocks The template directive injects the release name into the template The values that are passed into a template can be thought of as namespaced objects where a dot separates each namespaced element The leading dot before Release indicates that we start with the top most namespace for this scope we ll talk about scope in a bit So we could read Release Name as start at the top namespace find the Release object then look inside of it for an object called Name The Release object is one of the built in objects for Helm and we ll cover it in more depth later But for now it is sufficient to say that this will display the release name that the library assigns to our release Now when we install our resource we ll immediately see the result of using this template directive console helm install clunky serval mychart NAME clunky serval LAST DEPLOYED Tue Nov 1 17 45 37 2016 NAMESPACE default STATUS DEPLOYED REVISION 1 TEST SUITE None You can run helm get manifest clunky serval to see the entire generated YAML Note that the ConfigMap inside Kubernetes name is clunky serval configmap instead of mychart configmap previously At this point we ve seen templates at their most basic YAML files that have template directives embedded in In the next part we ll take a deeper look into templates But before moving on there s one quick trick that can make building templates faster When you want to test the template rendering but not actually install anything you can use helm install debug dry run goodly guppy mychart This will render the templates But instead of installing the chart it will return the rendered template to you so you can see the output console helm install debug dry run goodly guppy mychart install go 149 debug Original chart version install go 166 debug CHART PATH Users ninja mychart NAME goodly guppy LAST DEPLOYED Thu Dec 26 17 24 13 2019 NAMESPACE default STATUS pending install REVISION 1 TEST SUITE None USER SUPPLIED VALUES COMPUTED VALUES affinity fullnameOverride image pullPolicy IfNotPresent repository nginx imagePullSecrets ingress annotations enabled false hosts host chart example local paths tls nameOverride nodeSelector podSecurityContext 
replicaCount 1 resources securityContext service port 80 type ClusterIP serviceAccount create true name null tolerations HOOKS MANIFEST Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name goodly guppy configmap data myvalue Hello World Using dry run will make it easier to test your code but it won t ensure that Kubernetes itself will accept the templates you generate It s best not to assume that your chart will install just because dry run works In the Chart Template Guide index md we take the basic chart we defined here and explore the Helm template language in detail And we ll get started with built in objects |
helm template authors can use to make our templates less error prone and easier to title Appendix YAML Techniques read Most of this guide has been focused on writing the template language Here we ll look at the YAML format YAML has some useful features that we as A closer look at the YAML specification and how it applies to Helm weight 15 | ---
title: "Appendix: YAML Techniques"
description: "A closer look at the YAML specification and how it applies to Helm."
weight: 15
---
Most of this guide has been focused on writing the template language. Here,
we'll look at the YAML format. YAML has some useful features that we, as
template authors, can use to make our templates less error prone and easier to
read.
## Scalars and Collections
According to the [YAML spec](https://yaml.org/spec/1.2/spec.html), there are two
types of collections, and many scalar types.
The two types of collections are maps and sequences:
```yaml
map:
one: 1
two: 2
three: 3
sequence:
- one
- two
- three
```
Scalar values are individual values (as opposed to collections).
### Scalar Types in YAML
In Helm's dialect of YAML, the scalar data type of a value is determined by a
complex set of rules, including the Kubernetes schema for resource definitions.
But when inferring types, the following rules tend to hold true.
If an integer or float is an unquoted bare word, it is typically treated as a
numeric type:
```yaml
count: 1
size: 2.34
```
But if they are quoted, they are treated as strings:
```yaml
count: "1" # <-- string, not int
size: '2.34' # <-- string, not float
```
The same is true of booleans:
```yaml
isGood: true # bool
answer: "true" # string
```
The word for an empty value is `null` (not `nil`).
Note that `port: "80"` is valid YAML, and will pass through both the template
engine and the YAML parser, but will fail if Kubernetes expects `port` to be an
integer.
In some cases, you can force a particular type inference using YAML node tags:
```yaml
coffee: "yes, please"
age: !!str 21
port: !!int "80"
```
In the above, `!!str` tells the parser that `age` is a string, even if it looks
like an int. And `port` is treated as an int, even though it is quoted.
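Inside Helm templates, you can get a similar effect with the `quote` and `int`
template functions instead of node tags. This is only a sketch; the `.Values`
entries shown here are hypothetical:

```yaml
replicas: {{ .Values.replicaCount | int }}   # forces an integer, e.g. 3
tag: {{ .Values.image.tag | quote }}         # forces a string, e.g. "1.16.0"
```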
## Strings in YAML
Much of the data that we place in YAML documents consists of strings. YAML has
more than one way to represent a string. This section explains the ways and demonstrates
how to use some of them.
There are three "inline" ways of declaring a string:
```yaml
way1: bare words
way2: "double-quoted strings"
way3: 'single-quoted strings'
```
All inline styles must be on one line.
- Bare words are unquoted, and are not escaped. For this reason, you have to be
careful what characters you use.
- Double-quoted strings can have specific characters escaped with `\`. For
example `"\"Hello\", she said"`. You can escape line breaks with `\n`.
- Single-quoted strings are "literal" strings, and do not use the `\` to escape
characters. The only escape sequence is `''`, which is decoded as a single
`'`.
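For instance, the two quoted forms above differ in how they treat escapes (a
small illustration):

```yaml
greeting1: "line one\nline two"   # \n becomes a real line break
greeting2: 'line one\nline two'   # \n stays as the two characters \ and n
quoted: 'It''s escaped with two single quotes'
```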
In addition to the one-line strings, you can declare multi-line strings:
```yaml
coffee: |
Latte
Cappuccino
Espresso
```
The above will treat the value of `coffee` as a single string equivalent to
`Latte\nCappuccino\nEspresso\n`.
Note that the first line after the `|` must be correctly indented. So we could
break the example above by doing this:
```yaml
coffee: |
Latte
Cappuccino
Espresso
```
Because `Latte` is incorrectly indented, we'd get an error like this:
```
Error parsing file: error converting YAML to JSON: yaml: line 7: did not find expected key
```
In templates, it is sometimes safer to put a fake "first line" of content in a
multi-line document just for protection from the above error:
```yaml
coffee: |
# Commented first line
Latte
Cappuccino
Espresso
```
Note that whatever that first line is, it will be preserved in the output of the
string. So if you are, for example, using this technique to inject a file's
contents into a ConfigMap, the comment should be of the type expected by
whatever is reading that entry.
### Controlling Spaces in Multi-line Strings
In the example above, we used `|` to indicate a multi-line string. But notice
that the content of our string was followed with a trailing `\n`. If we want the
YAML processor to strip off the trailing newline, we can add a `-` after the
`|`:
```yaml
coffee: |-
Latte
Cappuccino
Espresso
```
Now the `coffee` value will be: `Latte\nCappuccino\nEspresso` (with no trailing
`\n`).
Other times, we might want all trailing whitespace to be preserved. We can do
this with the `|+` notation:
```yaml
coffee: |+
Latte
Cappuccino
Espresso
another: value
```
Now the value of `coffee` will be `Latte\nCappuccino\nEspresso\n\n\n`.
Indentation inside of a text block is preserved, and results in the preservation
of line breaks, too:
```yaml
coffee: |-
Latte
12 oz
16 oz
Cappuccino
Espresso
```
In the above case, `coffee` will be `Latte\n 12 oz\n 16
oz\nCappuccino\nEspresso`.
### Indenting and Templates
When writing templates, you may find yourself wanting to inject the contents of
a file into the template. As we saw in previous chapters, there are two ways of
doing this:
- Use `{{ .Files.Get "FILENAME" }}` to get the contents of a file in the chart.
- Use `{{ include "TEMPLATE" . }}` to render a template and then place its
  contents into the chart.
When inserting files into YAML, it's good to understand the multi-line rules
above. Oftentimes, the easiest way to insert a static file is to do something
like this:
```yaml
myfile: |
{{ .Files.Get "myfile.txt" | indent 2 }}
```
Note how we do the indentation above: `indent 2` tells the template engine to
indent every line in "myfile.txt" with two spaces. Note that we do not indent
that template line. That's because if we did, the file content of the first line
would be indented twice.
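In a ConfigMap, this pattern typically looks like the following sketch, which
assumes a `files/config.ini` exists inside the chart:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  config.ini: |-
{{ .Files.Get "files/config.ini" | indent 4 }}
```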
### Folded Multi-line Strings
Sometimes you want to represent a string in your YAML with multiple lines, but
want it to be treated as one long line when it is interpreted. This is called
"folding". To declare a folded block, use `>` instead of `|`:
```yaml
coffee: >
Latte
Cappuccino
Espresso
```
The value of `coffee` above will be `Latte Cappuccino Espresso\n`. Note that all
but the last line feed will be converted to spaces. You can combine the
whitespace controls with the folded text marker, so `>-` will replace or trim
all newlines.
Note that in the folded syntax, indenting text will cause lines to be preserved.
```yaml
coffee: >-
Latte
12 oz
16 oz
Cappuccino
Espresso
```
The above will produce `Latte\n 12 oz\n 16 oz\nCappuccino Espresso`. Note that
both the spacing and the newlines are still there.
## Embedding Multiple Documents in One File
It is possible to place more than one YAML document into a single file. This is
done by prefixing a new document with `---` and ending the document with
`...`
```yaml
---
document: 1
...
---
document: 2
...
```
In many cases, either the `---` or the `...` may be omitted.
Some files in Helm cannot contain more than one doc. If, for example, more than
one document is provided inside of a `values.yaml` file, only the first will be
used.
Template files, however, may have more than one document. When this happens, the
file (and all of its documents) is treated as one object during template
rendering. But then the resulting YAML is split into multiple documents before
it is fed to Kubernetes.
We recommend only using multiple documents per file when it is absolutely
necessary. Having multiple documents in a file can be difficult to debug.
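If you do need it, a single template file can emit two documents simply by
separating them with `---`. Both ConfigMaps in this sketch are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-first
data:
  role: "first"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-second
data:
  role: "second"
```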
## YAML is a Superset of JSON
Because YAML is a superset of JSON, any valid JSON document _should_ be valid
YAML.
```json
{
"coffee": "yes, please",
"coffees": [
"Latte", "Cappuccino", "Espresso"
]
}
```
The above is another way of representing this:
```yaml
coffee: yes, please
coffees:
- Latte
- Cappuccino
- Espresso
```
And the two can be mixed (with care):
```yaml
coffee: "yes, please"
coffees: [ "Latte", "Cappuccino", "Espresso"]
```
All three of these should parse into the same internal representation.
While this means that files such as `values.yaml` may contain JSON data, Helm
does not treat the file extension `.json` as a valid suffix.
## YAML Anchors
The YAML spec provides a way to store a reference to a value, and later refer to
that value by reference. YAML refers to this as "anchoring":
```yaml
coffee: "yes, please"
favorite: &favoriteCoffee "Cappuccino"
coffees:
- Latte
- *favoriteCoffee
- Espresso
```
In the above, `&favoriteCoffee` sets a reference to `Cappuccino`. Later, that
reference is used as `*favoriteCoffee`. So `coffees` becomes `Latte, Cappuccino,
Espresso`.
While there are a few cases where anchors are useful, there is one aspect of
them that can cause subtle bugs: The first time the YAML is consumed, the
reference is expanded and then discarded.
So if we were to decode and then re-encode the example above, the resulting YAML
would be:
```yaml
coffee: yes, please
favorite: Cappuccino
coffees:
- Latte
- Cappuccino
- Espresso
```
Because Helm and Kubernetes often read, modify, and then rewrite YAML files, the
anchors will be lost. | helm | title Appendix YAML Techniques description A closer look at the YAML specification and how it applies to Helm weight 15 Most of this guide has been focused on writing the template language Here we ll look at the YAML format YAML has some useful features that we as template authors can use to make our templates less error prone and easier to read Scalars and Collections According to the YAML spec https yaml org spec 1 2 spec html there are two types of collections and many scalar types The two types of collections are maps and sequences yaml map one 1 two 2 three 3 sequence one two three Scalar values are individual values as opposed to collections Scalar Types in YAML In Helm s dialect of YAML the scalar data type of a value is determined by a complex set of rules including the Kubernetes schema for resource definitions But when inferring types the following rules tend to hold true If an integer or float is an unquoted bare word it is typically treated as a numeric type yaml count 1 size 2 34 But if they are quoted they are treated as strings yaml count 1 string not int size 2 34 string not float The same is true of booleans yaml isGood true bool answer true string The word for an empty value is null not nil Note that port 80 is valid YAML and will pass through both the template engine and the YAML parser but will fail if Kubernetes expects port to be an integer In some cases you can force a particular type inference using YAML node tags yaml coffee yes please age str 21 port int 80 In the above str tells the parser that age is a string even if it looks like an int And port is treated as an int even though it is quoted Strings in YAML Much of the data that we place in YAML documents are strings YAML has more than one way to represent a string This section explains the ways and demonstrates how to use some of them There are three inline ways of declaring a string yaml way1 bare words way2 double quoted strings way3 single quoted strings All inline styles must be on one line Bare words are unquoted and are not escaped For this reason you have to be careful what characters you use Double quoted strings can have specific characters escaped with For example Hello she said You can escape line breaks with n Single quoted strings are literal strings and do not use the to escape characters The only escape sequence is which is decoded as a single In addition to the one line strings you can declare multi line strings yaml coffee Latte Cappuccino Espresso The above will treat the value of coffee as a single string equivalent to Latte nCappuccino nEspresso n Note that the first line after the must be correctly indented So we could break the example above by doing this yaml coffee Latte Cappuccino Espresso Because Latte is incorrectly indented we d get an error like this Error parsing file error converting YAML to JSON yaml line 7 did not find expected key In templates it is sometimes safer to put a fake first line of content in a multi line document just for protection from the above error yaml coffee Commented first line Latte Cappuccino Espresso Note that whatever that first line is it will be preserved in the output of the string So if you are for example using this technique to inject a file s contents into a ConfigMap the comment should be of the type expected by whatever is reading that entry Controlling Spaces in Multi line Strings In the example above we used to indicate a multi line string But notice that the content of our string was followed with a trailing n 
If we want the YAML processor to strip off the trailing newline we can add a after the yaml coffee Latte Cappuccino Espresso Now the coffee value will be Latte nCappuccino nEspresso with no trailing n Other times we might want all trailing whitespace to be preserved We can do this with the notation yaml coffee Latte Cappuccino Espresso another value Now the value of coffee will be Latte nCappuccino nEspresso n n n Indentation inside of a text block is preserved and results in the preservation of line breaks too yaml coffee Latte 12 oz 16 oz Cappuccino Espresso In the above case coffee will be Latte n 12 oz n 16 oz nCappuccino nEspresso Indenting and Templates When writing templates you may find yourself wanting to inject the contents of a file into the template As we saw in previous chapters there are two ways of doing this Use to get the contents of a file in the chart Use to render a template and then place its contents into the chart When inserting files into YAML it s good to understand the multi line rules above Often times the easiest way to insert a static file is to do something like this yaml myfile Note how we do the indentation above indent 2 tells the template engine to indent every line in myfile txt with two spaces Note that we do not indent that template line That s because if we did the file content of the first line would be indented twice Folded Multi line Strings Sometimes you want to represent a string in your YAML with multiple lines but want it to be treated as one long line when it is interpreted This is called folding To declare a folded block use instead of yaml coffee Latte Cappuccino Espresso The value of coffee above will be Latte Cappuccino Espresso n Note that all but the last line feed will be converted to spaces You can combine the whitespace controls with the folded text marker so will replace or trim all newlines Note that in the folded syntax indenting text will cause lines to be preserved yaml coffee Latte 12 oz 16 oz Cappuccino Espresso The above will produce Latte n 12 oz n 16 oz nCappuccino Espresso Note that both the spacing and the newlines are still there Embedding Multiple Documents in One File It is possible to place more than one YAML document into a single file This is done by prefixing a new document with and ending the document with yaml document 1 document 2 In many cases either the or the may be omitted Some files in Helm cannot contain more than one doc If for example more than one document is provided inside of a values yaml file only the first will be used Template files however may have more than one document When this happens the file and all of its documents is treated as one object during template rendering But then the resulting YAML is split into multiple documents before it is fed to Kubernetes We recommend only using multiple documents per file when it is absolutely necessary Having multiple documents in a file can be difficult to debug YAML is a Superset of JSON Because YAML is a superset of JSON any valid JSON document should be valid YAML json coffee yes please coffees Latte Cappuccino Espresso The above is another way of representing this yaml coffee yes please coffees Latte Cappuccino Espresso And the two can be mixed with care yaml coffee yes please coffees Latte Cappuccino Espresso All three of these should parse into the same internal representation While this means that files such as values yaml may contain JSON data Helm does not treat the file extension json as a valid suffix YAML Anchors The YAML spec provides a way 
to store a reference to a value and later refer to that value by reference YAML refers to this as anchoring yaml coffee yes please favorite favoriteCoffee Cappuccino coffees Latte favoriteCoffee Espresso In the above favoriteCoffee sets a reference to Cappuccino Later that reference is used as favoriteCoffee So coffees becomes Latte Cappuccino Espresso While there are a few cases where anchors are useful there is one aspect of them that can cause subtle bugs The first time the YAML is consumed the reference is expanded and then discarded So if we were to decode and then re encode the example above the resulting YAML would be yaml coffee yes please favorite Cappuccino coffees Latte Cappuccino Espresso Because Helm and Kubernetes often read modify and then rewrite YAML files the anchors will be lost |
helm It is time to move beyond one template and begin to create others In this them elsewhere A named template sometimes called a partial or a How to define named templates subtemplate is simply a template defined inside of a file and given a name section we will see how to define named templates in one file and then use weight 9 title Named Templates | ---
title: "Named Templates"
description: "How to define named templates."
weight: 9
---
It is time to move beyond one template, and begin to create others. In this
section, we will see how to define _named templates_ in one file, and then use
them elsewhere. A _named template_ (sometimes called a _partial_ or a
_subtemplate_) is simply a template defined inside of a file, and given a name.
We'll see two ways to create them, and a few different ways to use them.
In the [Flow Control](./control_structures.md) section we introduced three actions
for declaring and managing templates: `define`, `template`, and `block`. In this
section, we'll cover those three actions, and also introduce a special-purpose
`include` function that works similarly to the `template` action.
An important detail to keep in mind when naming templates: **template names are
global**. If you declare two templates with the same name, whichever one is
loaded last will be the one used. Because templates in subcharts are compiled
together with top-level templates, you should be careful to name your templates
with _chart-specific names_.
One popular naming convention is to prefix each defined template with the name
of the chart: `{{ define "mychart.labels" }}`. By using the specific chart name
as a prefix we can avoid any conflicts that may arise due to two different
charts that implement templates of the same name.
This behavior also applies to different versions of a chart. If you have
`mychart` version `1.0.0` that defines a template one way, and a `mychart`
version `2.0.0` that modifies the existing named template, it will use the one
that was loaded last. You can work around this issue by also adding a version
in the name of the chart: `` and
``.
## Partials and `_` files
So far, we've used one file, and that one file has contained a single template.
But Helm's template language allows you to create named embedded templates, that
can be accessed by name elsewhere.
Before we get to the nuts-and-bolts of writing those templates, there is a file
naming convention that deserves mention:
* Most files in `templates/` are treated as if they contain Kubernetes manifests
* The `NOTES.txt` is one exception
* But files whose name begins with an underscore (`_`) are assumed to _not_ have
a manifest inside. These files are not rendered to Kubernetes object
definitions, but are available everywhere within other chart templates for
use.
These files are used to store partials and helpers. In fact, when we first
created `mychart`, we saw a file called `_helpers.tpl`. That file is the default
location for template partials.
## Declaring and using templates with `define` and `template`
The `define` action allows us to create a named template inside of a template
file. Its syntax goes like this:
```yaml
{{- define "MY.NAME" }}
  # body of template here
{{- end }}
```
For example, we can define a template to encapsulate a Kubernetes block of
labels:
```yaml
{{- define "mychart.labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
{{- end }}
```
Now we can embed this template inside of our existing ConfigMap, and then
include it with the `template` action:
```yaml
{{- define "mychart.labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- template "mychart.labels" }}
data:
  myvalue: "Hello World"
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
```
When the template engine reads this file, it will store away the reference to
`mychart.labels` until `template "mychart.labels"` is called. Then it will
render that template inline. So the result will look like this:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: running-panda-configmap
labels:
generator: helm
date: 2016-11-02
data:
myvalue: "Hello World"
drink: "coffee"
food: "pizza"
```
Note: a `define` does not produce output unless it is called with a template,
as in this example.
Conventionally, Helm charts put these templates inside of a partials file,
usually `_helpers.tpl`. Let's move this function there:
```yaml
{{/* Generate basic labels */}}
{{- define "mychart.labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
{{- end }}
```
By convention, `define` functions should have a simple documentation block
(`{{/* ... */}}`) describing what they do.
Even though this definition is in `_helpers.tpl`, it can still be accessed in
`configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- template "mychart.labels" }}
data:
  myvalue: "Hello World"
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
```
As mentioned above, **template names are global**. As a result of this, if two
templates are declared with the same name the last occurrence will be the one
that is used. Since templates in subcharts are compiled together with top-level
templates, it is best to name your templates with _chart specific names_. A
popular naming convention is to prefix each defined template with the name of
the chart: `{{ define "mychart.labels" }}`.
## Setting the scope of a template
In the template we defined above, we did not use any objects. We just used
functions. Let's modify our defined template to include the chart name and chart
version:
```yaml
{{- define "mychart.labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
    chart: {{ .Chart.Name }}
    version: {{ .Chart.Version }}
{{- end }}
```
If we render this, we will get an error like this:
```console
$ helm install --dry-run moldy-jaguar ./mychart
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [unknown object type "nil" in ConfigMap.metadata.labels.chart, unknown object type "nil" in ConfigMap.metadata.labels.version]
```
To see what rendered, re-run with `--disable-openapi-validation`:
`helm install --dry-run --disable-openapi-validation moldy-jaguar ./mychart`.
The result will not be what we expect:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: moldy-jaguar-configmap
labels:
generator: helm
date: 2021-03-06
chart:
version:
```
What happened to the name and version? They weren't in the scope for our defined
template. When a named template (created with `define`) is rendered, it will
receive the scope passed in by the `template` call. In our example, we included
the template like this:
```yaml
{{- template "mychart.labels" }}
```
No scope was passed in, so within the template we cannot access anything in `.`.
This is easy enough to fix, though. We simply pass a scope to the template:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- template "mychart.labels" . }}
```
Note that we pass `.` at the end of the `template` call. We could just as easily
pass `.Values` or `.Values.favorite` or whatever scope we want. But what we want
is the top-level scope.
Now when we execute this template with `helm install --dry-run --debug
plinking-anaco ./mychart`, we get this:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: plinking-anaco-configmap
labels:
generator: helm
date: 2021-03-06
chart: mychart
version: 0.1.0
```
Now `{{ .Chart.Name }}` resolves to `mychart`, and `{{ .Chart.Version }}`
resolves to `0.1.0`.
## The `include` function
Say we've defined a simple template that looks like this:
```yaml
{{- define "mychart.app" -}}
app_name: {{ .Chart.Name }}
app_version: "{{ .Chart.Version }}"
{{- end -}}
```
Now say I want to insert this both into the `labels:` section of my template,
and also the `data:` section:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels:
    {{ template "mychart.app" . }}
data:
  myvalue: "Hello World"
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
{{ template "mychart.app" . }}
```
If we render this, we will get an error like this:
```console
$ helm install --dry-run measly-whippet ./mychart
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(ConfigMap): unknown field "app_name" in io.k8s.api.core.v1.ConfigMap, ValidationError(ConfigMap): unknown field "app_version" in io.k8s.api.core.v1.ConfigMap]
```
To see what rendered, re-run with `--disable-openapi-validation`:
`helm install --dry-run --disable-openapi-validation measly-whippet ./mychart`.
The output will not be what we expect:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: measly-whippet-configmap
labels:
app_name: mychart
app_version: "0.1.0"
data:
myvalue: "Hello World"
drink: "coffee"
food: "pizza"
app_name: mychart
app_version: "0.1.0"
```
Note that the indentation on `app_version` is wrong in both places. Why? Because
the template that is substituted in has the text aligned to the left. Because
`template` is an action, and not a function, there is no way to pass the output
of a `template` call to other functions; the data is simply inserted inline.
To work around this case, Helm provides an alternative to `template` that will
import the contents of a template into the present pipeline where it can be
passed along to other functions in the pipeline.
Here's the example above, corrected to use `indent` to indent the `mychart.app`
template correctly:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels:
{{ include "mychart.app" . | indent 4 }}
data:
  myvalue: "Hello World"
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
{{ include "mychart.app" . | indent 2 }}
```
Now the produced YAML is correctly indented for each section:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: edgy-mole-configmap
labels:
app_name: mychart
app_version: "0.1.0"
data:
myvalue: "Hello World"
drink: "coffee"
food: "pizza"
app_name: mychart
app_version: "0.1.0"
```
> It is considered preferable to use `include` over `template` in Helm templates
> simply so that the output formatting can be handled better for YAML documents.
Sometimes we want to import content, but not as templates. That is, we want to
import files verbatim. We can achieve this by accessing files through the
`.Files` object described in the next section. | helm | title Named Templates description How to define named templates weight 9 It is time to move beyond one template and begin to create others In this section we will see how to define named templates in one file and then use them elsewhere A named template sometimes called a partial or a subtemplate is simply a template defined inside of a file and given a name We ll see two ways to create them and a few different ways to use them In the Flow Control control structures md section we introduced three actions for declaring and managing templates define template and block In this section we ll cover those three actions and also introduce a special purpose include function that works similarly to the template action An important detail to keep in mind when naming templates template names are global If you declare two templates with the same name whichever one is loaded last will be the one used Because templates in subcharts are compiled together with top level templates you should be careful to name your templates with chart specific names One popular naming convention is to prefix each defined template with the name of the chart By using the specific chart name as a prefix we can avoid any conflicts that may arise due to two different charts that implement templates of the same name This behavior also applies to different versions of a chart If you have mychart version 1 0 0 that defines a template one way and a mychart version 2 0 0 that modifies the existing named template it will use the one that was loaded last You can work around this issue by also adding a version in the name of the chart and Partials and files So far we ve used one file and that one file has contained a single template But Helm s template language allows you to create named embedded templates that can be accessed by name elsewhere Before we get to the nuts and bolts of writing those templates there is file naming convention that deserves mention Most files in templates are treated as if they contain Kubernetes manifests The NOTES txt is one exception But files whose name begins with an underscore are assumed to not have a manifest inside These files are not rendered to Kubernetes object definitions but are available everywhere within other chart templates for use These files are used to store partials and helpers In fact when we first created mychart we saw a file called helpers tpl That file is the default location for template partials Declaring and using templates with define and template The define action allows us to create a named template inside of a template file Its syntax goes like this yaml body of template here For example we can define a template to encapsulate a Kubernetes block of labels yaml labels generator helm date Now we can embed this template inside of our existing ConfigMap and then include it with the template action yaml labels generator helm date apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World When the template engine reads this file it will store away the reference to mychart labels until template mychart labels is called Then it will render that template inline So the result will look like this yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name running panda configmap labels generator helm date 2016 11 02 data myvalue Hello World drink coffee food pizza Note a define does not produce output unless it is called with a template as in this example Conventionally Helm charts 
put these templates inside of a partials file usually helpers tpl Let s move this function there yaml labels generator helm date By convention define functions should have a simple documentation block describing what they do Even though this definition is in helpers tpl it can still be accessed in configmap yaml yaml apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World As mentioned above template names are global As a result of this if two templates are declared with the same name the last occurrence will be the one that is used Since templates in subcharts are compiled together with top level templates it is best to name your templates with chart specific names A popular naming convention is to prefix each defined template with the name of the chart Setting the scope of a template In the template we defined above we did not use any objects We just used functions Let s modify our defined template to include the chart name and chart version yaml labels generator helm date chart version If we render this we will get an error like this console helm install dry run moldy jaguar mychart Error unable to build kubernetes objects from release manifest error validating error validating data unknown object type nil in ConfigMap metadata labels chart unknown object type nil in ConfigMap metadata labels version To see what rendered re run with disable openapi validation helm install dry run disable openapi validation moldy jaguar mychart The result will not be what we expect yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name moldy jaguar configmap labels generator helm date 2021 03 06 chart version What happened to the name and version They weren t in the scope for our defined template When a named template created with define is rendered it will receive the scope passed in by the template call In our example we included the template like this yaml No scope was passed in so within the template we cannot access anything in This is easy enough to fix though We simply pass a scope to the template yaml apiVersion v1 kind ConfigMap metadata name configmap Note that we pass at the end of the template call We could just as easily pass Values or Values favorite or whatever scope we want But what we want is the top level scope Now when we execute this template with helm install dry run debug plinking anaco mychart we get this yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name plinking anaco configmap labels generator helm date 2021 03 06 chart mychart version 0 1 0 Now resolves to mychart and resolves to 0 1 0 The include function Say we ve defined a simple template that looks like this yaml app name app version Now say I want to insert this both into the labels section of my template and also the data section yaml apiVersion v1 kind ConfigMap metadata name configmap labels data myvalue Hello World If we render this we will get an error like this console helm install dry run measly whippet mychart Error unable to build kubernetes objects from release manifest error validating error validating data ValidationError ConfigMap unknown field app name in io k8s api core v1 ConfigMap ValidationError ConfigMap unknown field app version in io k8s api core v1 ConfigMap To see what rendered re run with disable openapi validation helm install dry run disable openapi validation measly whippet mychart The output will not be what we expect yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name 
measly whippet configmap labels app name mychart app version 0 1 0 data myvalue Hello World drink coffee food pizza app name mychart app version 0 1 0 Note that the indentation on app version is wrong in both places Why Because the template that is substituted in has the text aligned to the left Because template is an action and not a function there is no way to pass the output of a template call to other functions the data is simply inserted inline To work around this case Helm provides an alternative to template that will import the contents of a template into the present pipeline where it can be passed along to other functions in the pipeline Here s the example above corrected to use indent to indent the mychart app template correctly yaml apiVersion v1 kind ConfigMap metadata name configmap labels data myvalue Hello World Now the produced YAML is correctly indented for each section yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name edgy mole configmap labels app name mychart app version 0 1 0 data myvalue Hello World drink coffee food pizza app name mychart app version 0 1 0 It is considered preferable to use include over template in Helm templates simply so that the output formatting can be handled better for YAML documents Sometimes we want to import content but not as templates That is we want to import files verbatim We can achieve this by accessing files through the Files object described in the next section |
helm In the previous section we looked at the built in objects that Helm templates Instructions on how to use the values flag title Values Files weight 4 offer One of the built in objects is This object provides access to values passed into the chart Its contents come from multiple sources | ---
title: "Values Files"
description: "Instructions on how to use the --values flag."
weight: 4
---
In the previous section we looked at the built-in objects that Helm templates
offer. One of the built-in objects is `Values`. This object provides access to
values passed into the chart. Its contents come from multiple sources:
- The `values.yaml` file in the chart
- If this is a subchart, the `values.yaml` file of a parent chart
- A values file is passed into `helm install` or `helm upgrade` with the `-f`
flag (`helm install -f myvals.yaml ./mychart`)
- Individual parameters are passed with `--set` (such as `helm install --set foo=bar
./mychart`)
The list above is in order of specificity: `values.yaml` is the default, which
can be overridden by a parent chart's `values.yaml`, which can in turn be
overridden by a user-supplied values file, which can in turn be overridden by
`--set` parameters.
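For example, a single install can combine several of these sources at once; the
file and value names here are hypothetical:

```console
$ helm install demo ./mychart -f override.yaml --set favoriteDrink=slurm
```

Here `--set favoriteDrink=slurm` wins over anything in `override.yaml`, which in
turn wins over the chart's own `values.yaml`.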
Values files are plain YAML files. Let's edit `mychart/values.yaml` and then
edit our ConfigMap template.
Removing the defaults in `values.yaml`, we'll set just one parameter:
```yaml
favoriteDrink: coffee
```
Now we can use this inside of a template:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favoriteDrink }}
```
Notice on the last line we access `favoriteDrink` as an attribute of `Values`:
`{{ .Values.favoriteDrink }}`.
Let's see how this renders.
```console
$ helm install geared-marsupi ./mychart --dry-run --debug
install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /home/bagratte/src/playground/mychart
NAME: geared-marsupi
LAST DEPLOYED: Wed Feb 19 23:21:13 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
favoriteDrink: coffee
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: geared-marsupi-configmap
data:
myvalue: "Hello World"
drink: coffee
```
Because `favoriteDrink` is set in the default `values.yaml` file to `coffee`,
that's the value displayed in the template. We can easily override that by
adding a `--set` flag in our call to `helm install`:
```console
$ helm install solid-vulture ./mychart --dry-run --debug --set favoriteDrink=slurm
install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /home/bagratte/src/playground/mychart
NAME: solid-vulture
LAST DEPLOYED: Wed Feb 19 23:25:54 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
favoriteDrink: slurm
COMPUTED VALUES:
favoriteDrink: slurm
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: solid-vulture-configmap
data:
myvalue: "Hello World"
drink: slurm
```
Since `--set` has a higher precedence than the default `values.yaml` file, our
template generates `drink: slurm`.
Values files can contain more structured content, too. For example, we could
create a `favorite` section in our `values.yaml` file, and then add several keys
there:
```yaml
favorite:
drink: coffee
food: pizza
```
Now we would have to modify the template slightly:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink }}
  food: {{ .Values.favorite.food }}
```
While structuring data this way is possible, the recommendation is that you keep
your values trees shallow, favoring flatness. When we look at assigning values
to subcharts, we'll see how values are named using a tree structure.
## Deleting a default key
If you need to delete a key from the default values, you may override the value
of the key to be `null`, in which case Helm will remove the key from the
overridden values merge.
For example, the stable Drupal chart allows configuring the liveness probe, in
case you configure a custom image. Here are the default values:
```yaml
livenessProbe:
httpGet:
path: /user/login
port: http
initialDelaySeconds: 120
```
If you try to override the livenessProbe handler to `exec` instead of `httpGet`
using `--set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt]`, Helm will
coalesce the default and overridden keys together, resulting in the following
YAML:
```yaml
livenessProbe:
httpGet:
path: /user/login
port: http
exec:
command:
- cat
- docroot/CHANGELOG.txt
initialDelaySeconds: 120
```
However, Kubernetes would then fail because you cannot declare more than one
livenessProbe handler. To overcome this, you may instruct Helm to delete the
`livenessProbe.httpGet` by setting it to null:
```sh
helm install drupal stable/drupal --set image=my-registry/drupal:0.1.0 --set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt] --set livenessProbe.httpGet=null
```
At this point, we've seen several built-in objects, and used them to inject
information into a template. Now we will take a look at another aspect of the
template engine: functions and pipelines. | helm | title Values Files description Instructions on how to use the values flag weight 4 In the previous section we looked at the built in objects that Helm templates offer One of the built in objects is Values This object provides access to values passed into the chart Its contents come from multiple sources The values yaml file in the chart If this is a subchart the values yaml file of a parent chart A values file is passed into helm install or helm upgrade with the f flag helm install f myvals yaml mychart Individual parameters are passed with set such as helm install set foo bar mychart The list above is in order of specificity values yaml is the default which can be overridden by a parent chart s values yaml which can in turn be overridden by a user supplied values file which can in turn be overridden by set parameters Values files are plain YAML files Let s edit mychart values yaml and then edit our ConfigMap template Removing the defaults in values yaml we ll set just one parameter yaml favoriteDrink coffee Now we can use this inside of a template yaml apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World drink Notice on the last line we access favoriteDrink as an attribute of Values Let s see how this renders console helm install geared marsupi mychart dry run debug install go 158 debug Original chart version install go 175 debug CHART PATH home bagratte src playground mychart NAME geared marsupi LAST DEPLOYED Wed Feb 19 23 21 13 2020 NAMESPACE default STATUS pending install REVISION 1 TEST SUITE None USER SUPPLIED VALUES COMPUTED VALUES favoriteDrink coffee HOOKS MANIFEST Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name geared marsupi configmap data myvalue Hello World drink coffee Because favoriteDrink is set in the default values yaml file to coffee that s the value displayed in the template We can easily override that by adding a set flag in our call to helm install console helm install solid vulture mychart dry run debug set favoriteDrink slurm install go 158 debug Original chart version install go 175 debug CHART PATH home bagratte src playground mychart NAME solid vulture LAST DEPLOYED Wed Feb 19 23 25 54 2020 NAMESPACE default STATUS pending install REVISION 1 TEST SUITE None USER SUPPLIED VALUES favoriteDrink slurm COMPUTED VALUES favoriteDrink slurm HOOKS MANIFEST Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name solid vulture configmap data myvalue Hello World drink slurm Since set has a higher precedence than the default values yaml file our template generates drink slurm Values files can contain more structured content too For example we could create a favorite section in our values yaml file and then add several keys there yaml favorite drink coffee food pizza Now we would have to modify the template slightly yaml apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World drink food While structuring data this way is possible the recommendation is that you keep your values trees shallow favoring flatness When we look at assigning values to subcharts we ll see how values are named using a tree structure Deleting a default key If you need to delete a key from the default values you may override the value of the key to be null in which case Helm will remove the key from the overridden values merge For example the stable Drupal chart allows configuring the liveness probe in case you configure a custom image Here are the 
default values yaml livenessProbe httpGet path user login port http initialDelaySeconds 120 If you try to override the livenessProbe handler to exec instead of httpGet using set livenessProbe exec command cat docroot CHANGELOG txt Helm will coalesce the default and overridden keys together resulting in the following YAML yaml livenessProbe httpGet path user login port http exec command cat docroot CHANGELOG txt initialDelaySeconds 120 However Kubernetes would then fail because you can not declare more than one livenessProbe handler To overcome this you may instruct Helm to delete the livenessProbe httpGet by setting it to null sh helm install stable drupal set image my registry drupal 0 1 0 set livenessProbe exec command cat docroot CHANGELOG txt set livenessProbe httpGet null At this point we ve seen several built in objects and used them to inject information into a template Now we will take a look at another aspect of the template engine functions and pipelines |
helm weight 6 They are listed here and broken down by the following categories A list of template functions available in Helm Helm includes many template functions you can take advantage of in templates title Template Function List | ---
title: "Template Function List"
description: "A list of template functions available in Helm"
weight: 6
---
Helm includes many template functions you can take advantage of in templates.
They are listed here and broken down by the following categories:
* [Cryptographic and Security](#cryptographic-and-security-functions)
* [Date](#date-functions)
* [Dictionaries](#dictionaries-and-dict-functions)
* [Encoding](#encoding-functions)
* [File Path](#file-path-functions)
* [Kubernetes and Chart](#kubernetes-and-chart-functions)
* [Logic and Flow Control](#logic-and-flow-control-functions)
* [Lists](#lists-and-list-functions)
* [Math](#math-functions)
* [Float Math](#float-math-functions)
* [Network](#network-functions)
* [Reflection](#reflection-functions)
* [Regular Expressions](#regular-expressions)
* [Semantic Versions](#semantic-version-functions)
* [String](#string-functions)
* [Type Conversion](#type-conversion-functions)
* [URL](#url-functions)
* [UUID](#uuid-functions)
## Logic and Flow Control Functions
Helm includes numerous logic and control flow functions including [and](#and),
[coalesce](#coalesce), [default](#default), [empty](#empty), [eq](#eq),
[fail](#fail), [ge](#ge), [gt](#gt), [le](#le), [lt](#lt), [ne](#ne),
[not](#not), [or](#or), and [required](#required).
### and
Returns the boolean AND of two or more arguments
(the first empty argument, or the last argument).
```
and .Arg1 .Arg2
```
### or
Returns the boolean OR of two or more arguments
(the first non-empty argument, or the last argument).
```
or .Arg1 .Arg2
```
### not
Returns the boolean negation of its argument.
```
not .Arg
```
### eq
Returns the boolean equality of the arguments (e.g., Arg1 == Arg2).
```
eq .Arg1 .Arg2
```
### ne
Returns the boolean inequality of the arguments (e.g., Arg1 != Arg2)
```
ne .Arg1 .Arg2
```
### lt
Returns a boolean true if the first argument is less than the second. False is
returned otherwise (e.g., Arg1 < Arg2).
```
lt .Arg1 .Arg2
```
### le
Returns a boolean true if the first argument is less than or equal to the
second. False is returned otherwise (e.g., Arg1 <= Arg2).
```
le .Arg1 .Arg2
```
### gt
Returns a boolean true if the first argument is greater than the second. False
is returned otherwise (e.g., Arg1 > Arg2).
```
gt .Arg1 .Arg2
```
### ge
Returns a boolean true if the first argument is greater than or equal to the
second. False is returned otherwise (e.g., Arg1 >= Arg2).
```
ge .Arg1 .Arg2
```
### default
To set a simple default value, use `default`:
```
default "foo" .Bar
```
In the above, if `.Bar` evaluates to a non-empty value, it will be used. But if
it is empty, `foo` will be returned instead.
The definition of "empty" depends on type:
- Numeric: 0
- String: ""
- Lists: `[]`
- Dicts: `{}`
- Boolean: `false`
- And always `nil` (aka null)
For structs, there is no definition of empty, so a struct will never return the
default.
### required
Specify values that must be set with `required`:
```
required "A valid foo is required!" .Bar
```
If `.Bar` is empty or not defined (see [default](#default) on how this is
evaluated), the template will not render and will return the error message
supplied instead.
### empty
The `empty` function returns `true` if the given value is considered empty, and
`false` otherwise. The empty values are listed in the `default` section.
```
empty .Foo
```
Note that in Go template conditionals, emptiness is calculated for you. Thus,
you rarely need `if not empty .Foo`. Instead, just use `if .Foo`.
### fail
Unconditionally returns an empty `string` and an `error` with the specified
text. This is useful in scenarios where other conditionals have determined that
template rendering should fail.
```
fail "Please accept the end user license agreement"
```
### coalesce
The `coalesce` function takes a list of values and returns the first non-empty
one.
```
coalesce 0 1 2
```
The above returns `1`.
This function is useful for scanning through multiple variables or values:
```
coalesce .name .parent.name "Matt"
```
The above will first check to see if `.name` is empty. If it is not, it will
return that value. If it _is_ empty, `coalesce` will evaluate `.parent.name` for
emptiness. Finally, if both `.name` and `.parent.name` are empty, it will return
`Matt`.
### ternary
The `ternary` function takes two values, and a test value. If the test value is
true, the first value will be returned. If the test value is false, the second
value will be returned. This is similar to the ternary operator in C and other programming languages.
#### true test value
```
ternary "foo" "bar" true
```
or
```
true | ternary "foo" "bar"
```
The above returns `"foo"`.
#### false test value
```
ternary "foo" "bar" false
```
or
```
false | ternary "foo" "bar"
```
The above returns `"bar"`.
## String Functions
Helm includes the following string functions: [abbrev](#abbrev),
[abbrevboth](#abbrevboth), [camelcase](#camelcase), [cat](#cat),
[contains](#contains), [hasPrefix](#hasprefix-and-hassuffix),
[hasSuffix](#hasprefix-and-hassuffix), [indent](#indent), [initials](#initials),
[kebabcase](#kebabcase), [lower](#lower), [nindent](#nindent),
[nospace](#nospace), [plural](#plural), [print](#print), [printf](#printf),
[println](#println), [quote](#quote-and-squote),
[randAlpha](#randalphanum-randalpha-randnumeric-and-randascii),
[randAlphaNum](#randalphanum-randalpha-randnumeric-and-randascii),
[randAscii](#randalphanum-randalpha-randnumeric-and-randascii),
[randNumeric](#randalphanum-randalpha-randnumeric-and-randascii),
[repeat](#repeat), [replace](#replace), [shuffle](#shuffle),
[snakecase](#snakecase), [squote](#quote-and-squote), [substr](#substr),
[swapcase](#swapcase), [title](#title), [trim](#trim), [trimAll](#trimall),
[trimPrefix](#trimprefix), [trimSuffix](#trimsuffix), [trunc](#trunc),
[untitle](#untitle), [upper](#upper), [wrap](#wrap), and [wrapWith](#wrapwith).
### print
Returns a string from the combination of its parts.
```
print "Matt has " .Dogs " dogs"
```
Types that are not strings are converted to strings where possible.
Note that when neither of two adjacent arguments is a string, a space is added
between them.
### println
Works the same way as [print](#print) but adds a new line at the end.
### printf
Returns a string based on a formatting string and the arguments to pass to it in
order.
```
printf "%s has %d dogs." .Name .NumberDogs
```
The placeholder to use depends on the type for the argument being passed in.
This includes:
General purpose:
* `%v` the value in a default format
* when printing dicts, the plus flag (%+v) adds field names
* `%%` a literal percent sign; consumes no value
Boolean:
* `%t` the word true or false
Integer:
* `%b` base 2
* `%c` the character represented by the corresponding Unicode code point
* `%d` base 10
* `%o` base 8
* `%O` base 8 with 0o prefix
* `%q` a single-quoted character literal safely escaped
* `%x` base 16, with lower-case letters for a-f
* `%X` base 16, with upper-case letters for A-F
* `%U` Unicode format: U+1234; same as "U+%04X"
Floating-point and complex constituents:
* `%b` decimalless scientific notation with exponent a power of two, e.g.
-123456p-78
* `%e` scientific notation, e.g. -1.234456e+78
* `%E` scientific notation, e.g. -1.234456E+78
* `%f` decimal point but no exponent, e.g. 123.456
* `%F` synonym for %f
* `%g` %e for large exponents, %f otherwise.
* `%G` %E for large exponents, %F otherwise
* `%x` hexadecimal notation (with decimal power of two exponent), e.g.
-0x1.23abcp+20
* `%X` upper-case hexadecimal notation, e.g. -0X1.23ABCP+20
String and slice of bytes (treated equivalently with these verbs):
* `%s` the uninterpreted bytes of the string or slice
* `%q` a double-quoted string safely escaped
* `%x` base 16, lower-case, two characters per byte
* `%X` base 16, upper-case, two characters per byte
Slice:
* `%p` address of 0th element in base 16 notation, with leading 0x
### trim
The `trim` function removes white space from both sides of a string:
```
trim " hello "
```
The above produces `hello`
### trimAll
Removes the given characters from the front and back of a string:
```
trimAll "$" "$5.00"
```
The above returns `5.00` (as a string).
### trimPrefix
Trim just the prefix from a string:
```
trimPrefix "-" "-hello"
```
The above returns `hello`
### trimSuffix
Trim just the suffix from a string:
```
trimSuffix "-" "hello-"
```
The above returns `hello`
### lower
Convert the entire string to lowercase:
```
lower "HELLO"
```
The above returns `hello`
### upper
Convert the entire string to uppercase:
```
upper "hello"
```
The above returns `HELLO`
### title
Convert to title case:
```
title "hello world"
```
The above returns `Hello World`
### untitle
Remove title casing. `untitle "Hello World"` produces `hello world`.
### repeat
Repeat a string multiple times:
```
repeat 3 "hello"
```
The above returns `hellohellohello`
### substr
Get a substring from a string. It takes three parameters:
- start (int)
- end (int)
- string (string)
```
substr 0 5 "hello world"
```
The above returns `hello`
### nospace
Remove all whitespace from a string.
```
nospace "hello w o r l d"
```
The above returns `helloworld`
### trunc
Truncate a string
```
trunc 5 "hello world"
```
The above produces `hello`.
```
trunc -5 "hello world"
```
The above produces `world`.
### abbrev
Truncate a string with ellipses (`...`)
Parameters:
- max length
- the string
```
abbrev 5 "hello world"
```
The above returns `he...`, since it counts the width of the ellipses against the
maximum length.
### abbrevboth
Abbreviate both sides:
```
abbrevboth 5 10 "1234 5678 9123"
```
The above produces `...5678...`
It takes:
- left offset
- max length
- the string
### initials
Given multiple words, take the first letter of each word and combine.
```
initials "First Try"
```
The above returns `FT`
### randAlphaNum, randAlpha, randNumeric, and randAscii
These four functions generate cryptographically secure (using `crypto/rand`)
random strings, but with different base character sets:
- `randAlphaNum` uses `0-9a-zA-Z`
- `randAlpha` uses `a-zA-Z`
- `randNumeric` uses `0-9`
- `randAscii` uses all printable ASCII characters
Each of them takes one parameter: the integer length of the string.
```
randNumeric 3
```
The above will produce a random string with three digits.
### wrap
Wrap text at a given column count:
```
wrap 80 $someText
```
The above will wrap the string in `$someText` at 80 columns.
### wrapWith
`wrapWith` works as `wrap`, but lets you specify the string to wrap with.
(`wrap` uses `\n`)
```
wrapWith 5 "\t" "Hello World"
```
The above produces `Hello World` (where the whitespace is an ASCII tab
character)
### contains
Test to see if one string is contained inside of another:
```
contains "cat" "catch"
```
The above returns `true` because `catch` contains `cat`.
### hasPrefix and hasSuffix
The `hasPrefix` and `hasSuffix` functions test whether a string has a given
prefix or suffix:
```
hasPrefix "cat" "catch"
```
The above returns `true` because `catch` has the prefix `cat`.
### quote and squote
These functions wrap a string in double quotes (`quote`) or single quotes
(`squote`).
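For example:

```
quote "hello"
squote "hello"
```

The above return `"hello"` and `'hello'`, respectively. In chart templates this
is commonly written as a pipeline such as `.Values.someString | quote` (where
`.Values.someString` stands for any string value in your chart) to ensure the
value is rendered as a quoted YAML string.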
### cat
The `cat` function concatenates multiple strings together into one, separating
them with spaces:
```
cat "hello" "beautiful" "world"
```
The above produces `hello beautiful world`
### indent
The `indent` function indents every line in a given string to the specified
indent width. This is useful when aligning multi-line strings:
```
indent 4 $lots_of_text
```
The above will indent every line of text by 4 space characters.
### nindent
The `nindent` function is the same as the indent function, but prepends a new
line to the beginning of the string.
```
nindent 4 $lots_of_text
```
The above will indent every line of text by 4 space characters and add a new
line to the beginning.
### replace
Perform simple string replacement.
It takes three arguments:
- string to replace
- string to replace with
- source string
```
"I Am Henry VIII" | replace " " "-"
```
The above will produce `I-Am-Henry-VIII`
### plural
Pluralize a string.
```
len $fish | plural "one anchovy" "many anchovies"
```
In the above, if the length of the string is 1, the first argument will be
printed (`one anchovy`). Otherwise, the second argument will be printed (`many
anchovies`).
The arguments are:
- singular string
- plural string
- length integer
NOTE: Helm does not currently support languages with more complex pluralization
rules. And `0` is considered a plural because the English language treats it as
such (`zero anchovies`).
### snakecase
Convert string from camelCase to snake_case.
```
snakecase "FirstName"
```
The above will produce `first_name`.
### camelcase
Convert string from snake_case to CamelCase
```
camelcase "http_server"
```
The above will produce `HttpServer`.
### kebabcase
Convert string from camelCase to kebab-case.
```
kebabcase "FirstName"
```
The above will produce `first-name`.
### swapcase
Swap the case of a string using a word based algorithm.
Conversion algorithm:
- Upper case character converts to Lower case
- Title case character converts to Lower case
- Lower case character after Whitespace or at start converts to Title case
- Other Lower case character converts to Upper case
- Whitespace is defined by unicode.IsSpace(char)
```
swapcase "This Is A.Test"
```
The above will produce `tHIS iS a.tEST`.
### shuffle
Shuffle a string.
```
shuffle "hello"
```
The above will randomize the letters in `hello`, perhaps producing `oelhl`.
## Type Conversion Functions
The following type conversion functions are provided by Helm:
- `atoi`: Convert a string to an integer.
- `float64`: Convert to a `float64`.
- `int`: Convert to an `int` at the system's width.
- `int64`: Convert to an `int64`.
- `toDecimal`: Convert a unix octal to an `int64`.
- `toString`: Convert to a string.
- `toStrings`: Convert a list, slice, or array to a list of strings.
- `toJson` (`mustToJson`): Convert list, slice, array, dict, or object to JSON.
- `toPrettyJson` (`mustToPrettyJson`): Convert list, slice, array, dict, or
object to indented JSON.
- `toRawJson` (`mustToRawJson`): Convert list, slice, array, dict, or object to
JSON with HTML characters unescaped.
- `fromYaml`: Convert a YAML string to an object.
- `fromJson`: Convert a JSON string to an object.
- `fromJsonArray`: Convert a JSON array to a list.
- `toYaml`: Convert list, slice, array, dict, or object to indented yaml. It can be used to copy chunks of yaml from any source. This function is equivalent to the Go yaml.Marshal function; see the docs here: https://pkg.go.dev/gopkg.in/yaml.v2#Marshal
- `toToml`: Convert list, slice, array, dict, or object to toml. It can be used to copy chunks of toml from any source.
- `fromYamlArray`: Convert a YAML array to a list.
Only `atoi` requires that the input be a specific type. The others will attempt
to convert from any type to the destination type. For example, `int64` can
convert floats to ints, and it can also convert strings to ints.
### toStrings
Given a list-like collection, produce a slice of strings.
```
list 1 2 3 | toStrings
```
The above converts `1` to `"1"`, `2` to `"2"`, and so on, and then returns them
as a list.
### toDecimal
Given a unix octal permission, produce a decimal.
```
"0777" | toDecimal
```
The above converts `0777` to `511` and returns the value as an int64.
### toJson, mustToJson
The `toJson` function encodes an item into a JSON string. If the item cannot be
converted to JSON the function will return an empty string. `mustToJson` will
return an error in case the item cannot be encoded in JSON.
```
toJson .Item
```
The above returns JSON string representation of `.Item`.
### toPrettyJson, mustToPrettyJson
The `toPrettyJson` function encodes an item into a pretty (indented) JSON
string.
```
toPrettyJson .Item
```
The above returns indented JSON string representation of `.Item`.
### toRawJson, mustToRawJson
The `toRawJson` function encodes an item into JSON string with HTML characters
unescaped.
```
toRawJson .Item
```
The above returns unescaped JSON string representation of `.Item`.
### fromYaml
The `fromYaml` function takes a YAML string and returns an object that can be used in templates.
`File at: yamls/person.yaml`
```yaml
name: Bob
age: 25
hobbies:
- hiking
- fishing
- cooking
```
```yaml
{{- $person := .Files.Get "yamls/person.yaml" | fromYaml }}
greeting: |
  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.
  My hobbies are {{ range $person.hobbies }}{{ . }} {{ end }}.
```
### fromJson
The `fromJson` function takes a JSON string and returns an object that can be used in templates.
`File at: jsons/person.json`
```json
{
"name": "Bob",
"age": 25,
"hobbies": [
"hiking",
"fishing",
"cooking"
]
}
```
```yaml
{{- $person := .Files.Get "jsons/person.json" | fromJson }}
greeting: |
  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.
  My hobbies are {{ range $person.hobbies }}{{ . }} {{ end }}.
```
### fromJsonArray
The `fromJsonArray` function takes a JSON Array and returns a list that can be used in templates.
`File at: jsons/people.json`
```json
[
{ "name": "Bob","age": 25 },
{ "name": "Ram","age": 16 }
]
```
```yaml
{{- $people := .Files.Get "jsons/people.json" | fromJsonArray }}
{{- range $person := $people }}
greeting: |
  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.
{{- end }}
```
### fromYamlArray
The `fromYamlArray` function takes a YAML Array and returns a list that can be used in templates.
`File at: yamls/people.yml`
```yaml
- name: Bob
age: 25
- name: Ram
age: 16
```
```yaml
{{- $people := .Files.Get "yamls/people.yml" | fromYamlArray }}
{{- range $person := $people }}
greeting: |
  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.
{{- end }}
```
## Regular Expressions
Helm includes the following regular expression functions: [regexFind
(mustRegexFind)](#regexfind-mustregexfind), [regexFindAll
(mustRegexFindAll)](#regexfindall-mustregexfindall), [regexMatch
(mustRegexMatch)](#regexmatch-mustregexmatch), [regexReplaceAll
(mustRegexReplaceAll)](#regexreplaceall-mustregexreplaceall),
[regexReplaceAllLiteral
(mustRegexReplaceAllLiteral)](#regexreplaceallliteral-mustregexreplaceallliteral),
[regexSplit (mustRegexSplit)](#regexsplit-mustregexsplit).
### regexMatch, mustRegexMatch
Returns true if the input string contains any match of the regular expression.
```
regexMatch "^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$" "[email protected]"
```
The above produces `true`
`regexMatch` panics if there is a problem and `mustRegexMatch` returns an error
to the template engine if there is a problem.
### regexFindAll, mustRegexFindAll
Returns a slice of all matches of the regular expression in the input string.
The last parameter n determines the number of substrings to return, where -1
means return all matches
```
regexFindAll "[2,4,6,8]" "123456789" -1
```
The above produces `[2 4 6 8]`
`regexFindAll` panics if there is a problem and `mustRegexFindAll` returns an
error to the template engine if there is a problem.
### regexFind, mustRegexFind
Return the first (left most) match of the regular expression in the input string
```
regexFind "[a-zA-Z][1-9]" "abcd1234"
```
The above produces `d1`
`regexFind` panics if there is a problem and `mustRegexFind` returns an error to
the template engine if there is a problem.
### regexReplaceAll, mustRegexReplaceAll
Returns a copy of the input string, replacing matches of the Regexp with the
replacement string. Inside the replacement string, `$` signs are interpreted as
in Expand, so for instance `$1` represents the text of the first submatch.
```
regexReplaceAll "a(x*)b" "-ab-axxb-" "${1}W"
```
The above produces `-W-xxW-`
`regexReplaceAll` panics if there is a problem and `mustRegexReplaceAll` returns
an error to the template engine if there is a problem.
### regexReplaceAllLiteral, mustRegexReplaceAllLiteral
Returns a copy of the input string, replacing matches of the Regexp with the
replacement string. The replacement string is substituted directly, without
using Expand.
```
regexReplaceAllLiteral "a(x*)b" "-ab-axxb-" "${1}"
```
The above produces `-${1}-${1}-`
`regexReplaceAllLiteral` panics if there is a problem and
`mustRegexReplaceAllLiteral` returns an error to the template engine if there is
a problem.
### regexSplit, mustRegexSplit
Slices the input string into substrings separated by the expression and returns
a slice of the substrings between those expression matches. The last parameter
`n` determines the number of substrings to return, where `-1` means return all
matches
```
regexSplit "z+" "pizza" -1
```
The above produces `[pi a]`
`regexSplit` panics if there is a problem and `mustRegexSplit` returns an error
to the template engine if there is a problem.
## Cryptographic and Security Functions
Helm provides some advanced cryptographic functions. They include
[adler32sum](#adler32sum), [buildCustomCert](#buildcustomcert),
[decryptAES](#decryptaes), [derivePassword](#derivepassword),
[encryptAES](#encryptaes), [genCA](#genca), [genPrivateKey](#genprivatekey),
[genSelfSignedCert](#genselfsignedcert), [genSignedCert](#gensignedcert),
[htpasswd](#htpasswd), [sha1sum](#sha1sum), and [sha256sum](#sha256sum).
### sha1sum
The `sha1sum` function receives a string, and computes its SHA1 digest.
```
sha1sum "Hello world!"
```
### sha256sum
The `sha256sum` function receives a string, and computes its SHA256 digest.
```
sha256sum "Hello world!"
```
The above will compute the SHA 256 sum in an "ASCII armored" format that is safe
to print.
### adler32sum
The `adler32sum` function receives a string, and computes its Adler-32 checksum.
```
adler32sum "Hello world!"
```
### htpasswd
The `htpasswd` function takes a `username` and `password` and generates a
`bcrypt` hash of the password. The result can be used for basic authentication
on an [Apache HTTP
Server](https://httpd.apache.org/docs/2.4/misc/password_encryptions.html#basic).
```
htpasswd "myUser" "myPassword"
```
Note that it is insecure to store the password directly in the template.
### derivePassword
The `derivePassword` function can be used to derive a specific password based on
some shared "master password" constraints. The algorithm for this is [well
specified](https://web.archive.org/web/20211019121301/https://masterpassword.app/masterpassword-algorithm.pdf).
```
derivePassword 1 "long" "password" "user" "example.com"
```
Note that it is considered insecure to store the parts directly in the template.
### genPrivateKey
The `genPrivateKey` function generates a new private key encoded into a PEM
block.
It takes one of the following values as its first parameter:
- `ecdsa`: Generate an elliptic curve DSA key (P256)
- `dsa`: Generate a DSA key (L2048N256)
- `rsa`: Generate an RSA 4096 key
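For example, a minimal usage that captures the generated PEM block in a
variable (the variable name is illustrative):

```
$key := genPrivateKey "rsa"
```

The resulting PEM string can then be embedded in a Secret or passed to other
template functions such as `b64enc`.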
### buildCustomCert
The `buildCustomCert` function allows customizing the certificate.
It takes the following string parameters:
- A base64 encoded PEM format certificate
- A base64 encoded PEM format private key
It returns a certificate object with the following attributes:
- `Cert`: A PEM-encoded certificate
- `Key`: A PEM-encoded private key
Example:
```
$ca := buildCustomCert "base64-encoded-ca-crt" "base64-encoded-ca-key"
```
Note that the returned object can be passed to the `genSignedCert` function to
sign a certificate using this CA.
### genCA
The `genCA` function generates a new, self-signed x509 certificate authority.
It takes the following parameters:
- Subject's common name (cn)
- Cert validity duration in days
It returns an object with the following attributes:
- `Cert`: A PEM-encoded certificate
- `Key`: A PEM-encoded private key
Example:
```
$ca := genCA "foo-ca" 365
```
Note that the returned object can be passed to the `genSignedCert` function to
sign a certificate using this CA.
### genSelfSignedCert
The `genSelfSignedCert` function generates a new, self-signed x509 certificate.
It takes the following parameters:
- Subject's common name (cn)
- Optional list of IPs; may be nil
- Optional list of alternate DNS names; may be nil
- Cert validity duration in days
It returns an object with the following attributes:
- `Cert`: A PEM-encoded certificate
- `Key`: A PEM-encoded private key
Example:
```
$cert := genSelfSignedCert "foo.com" (list "10.0.0.1" "10.0.0.2") (list "bar.com" "bat.com") 365
```
### genSignedCert
The `genSignedCert` function generates a new, x509 certificate signed by the
specified CA.
It takes the following parameters:
- Subject's common name (cn)
- Optional list of IPs; may be nil
- Optional list of alternate DNS names; may be nil
- Cert validity duration in days
- CA (see `genCA`)
Example:
```
$ca := genCA "foo-ca" 365
$cert := genSignedCert "foo.com" (list "10.0.0.1" "10.0.0.2") (list "bar.com" "bat.com") 365 $ca
```
### encryptAES
The `encryptAES` function encrypts text with AES-256 CBC and returns a base64
encoded string.
```
encryptAES "secretkey" "plaintext"
```
### decryptAES
The `decryptAES` function receives a base64 string encoded by the AES-256 CBC
algorithm and returns the decoded text.
```
"30tEfhuJSVRhpG97XCuWgz2okj7L8vQ1s6V9zVUPeDQ=" | decryptAES "secretkey"
```
## Date Functions
Helm includes the following date functions you can use in templates:
[ago](#ago), [date](#date), [dateInZone](#dateinzone), [dateModify
(mustDateModify)](#datemodify-mustdatemodify), [duration](#duration),
[durationRound](#durationround), [htmlDate](#htmldate),
[htmlDateInZone](#htmldateinzone), [now](#now), [toDate
(mustToDate)](#todate-musttodate), and [unixEpoch](#unixepoch).
### now
The current date/time. Use this in conjunction with other date functions.
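For example, piping `now` into the `date` function (described below) renders
the current timestamp in a chosen layout:

```
now | date "2006-01-02T15:04:05"
```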
### ago
The `ago` function returns the duration from `time.Now` in seconds resolution.
```
ago .CreatedAt
```
The above returns a value in `time.Duration` String() format, for example:
```
2h34m7s
```
### date
The `date` function formats a date.
Format the date to YEAR-MONTH-DAY:
```
now | date "2006-01-02"
```
Date formatting in Go is a [little bit
different](https://pauladamsmith.com/blog/2011/05/go_time.html).
In short, take this as the base date:
```
Mon Jan 2 15:04:05 MST 2006
```
Write it in the format you want. Above, `2006-01-02` is the same date, but in
the format we want.
### dateInZone
Same as `date`, but with a timezone.
```
dateInZone "2006-01-02" (now) "UTC"
```
### duration
Formats a given number of seconds as a `time.Duration`.
The following returns `1m35s`:
```
duration "95"
```
### durationRound
Rounds a given duration to the most significant unit. Strings and
`time.Duration` values are parsed as a duration, while a `time.Time` is treated
as the duration since that time.
The following returns `2h`:
```
durationRound "2h10m5s"
```
The following returns `3mo`:
```
durationRound "2400h10m5s"
```
### unixEpoch
Returns the seconds since the unix epoch for a `time.Time`.
```
now | unixEpoch
```
### dateModify, mustDateModify
The `dateModify` function takes a modification and a date and returns the modified timestamp.
Subtract an hour and thirty minutes from the current time:
```
now | dateModify "-1.5h"
```
If the modification format is wrong, `dateModify` will return the date
unmodified, whereas `mustDateModify` will return an error.
### htmlDate
The `htmlDate` function formats a date for inserting into an HTML date picker
input field.
```
now | htmlDate
```
### htmlDateInZone
Same as htmlDate, but with a timezone.
```
htmlDateInZone (now) "UTC"
```
### toDate, mustToDate
`toDate` converts a string to a date. The first argument is the date layout and
the second the date string. If the string can't be converted, it returns the zero
value. `mustToDate` will return an error in case the string cannot be converted.
This is useful when you want to convert a string date to another format (using
pipe). The example below converts "2017-12-31" to "31/12/2017".
```
toDate "2006-01-02" "2017-12-31" | date "02/01/2006"
```
## Dictionaries and Dict Functions
Helm provides a key/value storage type called a `dict` (short for "dictionary",
as in Python). A `dict` is an _unordered_ type.
The key to a dictionary **must be a string**. However, the value can be any
type, even another `dict` or `list`.
Unlike `list`s, `dict`s are not immutable. The `set` and `unset` functions will
modify the contents of a dictionary.
Helm provides the following functions to support working with dicts: [deepCopy
(mustDeepCopy)](#deepcopy-mustdeepcopy), [dict](#dict), [dig](#dig), [get](#get),
[hasKey](#haskey), [keys](#keys), [merge (mustMerge)](#merge-mustmerge),
[mergeOverwrite (mustMergeOverwrite)](#mergeoverwrite-mustmergeoverwrite),
[omit](#omit), [pick](#pick), [pluck](#pluck), [set](#set), [unset](#unset), and
[values](#values).
### dict
Creating dictionaries is done by calling the `dict` function and passing it a
list of pairs.
The following creates a dictionary with three items:
```
$myDict := dict "name1" "value1" "name2" "value2" "name3" "value 3"
```
### get
Given a map and a key, get the value from the map.
```
get $myDict "name1"
```
The above returns `"value1"`
Note that if the key is not found, this operation will simply return `""`. No
error will be generated.
### set
Use `set` to add a new key/value pair to a dictionary.
```
$_ := set $myDict "name4" "value4"
```
Note that `set` _returns the dictionary_ (a requirement of Go template
functions), so you may need to trap the value as done above with the `$_`
assignment.
### unset
Given a map and a key, delete the key from the map.
```
$_ := unset $myDict "name4"
```
As with `set`, this returns the dictionary.
Note that if the key is not found, this operation will simply return. No error
will be generated.
### hasKey
The `hasKey` function returns `true` if the given dict contains the given key.
```
hasKey $myDict "name1"
```
If the key is not found, this returns `false`.
### pluck
The `pluck` function makes it possible to give one key and multiple maps, and
get a list of all of the matches:
```
pluck "name1" $myDict $myOtherDict
```
The above will return a `list` containing every found value (`[value1
otherValue1]`).
If the given key is _not found_ in a map, that map will not have an item in the
list (and the length of the returned list will be less than the number of dicts
in the call to `pluck`).
If the key is _found_ but the value is an empty value, that value will be
inserted.
A common idiom in Helm templates is to use `pluck... | first` to get the first
matching key out of a collection of dictionaries.
### dig
The `dig` function traverses a nested set of dicts, selecting keys from a list
of values. It returns a default value if any of the keys are not found at the
associated dict.
```
dig "user" "role" "humanName" "guest" $dict
```
Given a dict structured like
```
{
user: {
role: {
humanName: "curator"
}
}
}
```
the above would return `"curator"`. If the dict lacked even a `user` field,
the result would be `"guest"`.
Dig can be very useful in cases where you'd like to avoid guard clauses,
especially since Go's template package's `and` doesn't short-circuit. For instance,
`and a.maybeNil a.maybeNil.iNeedThis` will always evaluate
`a.maybeNil.iNeedThis`, and panic if `a` lacks a `maybeNil` field.
`dig` accepts its dict argument last in order to support pipelining. For instance:
```
merge a b c | dig "one" "two" "three" "<missing>"
```
### merge, mustMerge
Merge two or more dictionaries into one, giving precedence to the dest
dictionary:
Given:
```
dst:
default: default
overwrite: me
key: true
src:
overwrite: overwritten
key: false
```
will result in:
```
newdict:
default: default
overwrite: me
key: true
```
```
$newdict := merge $dest $source1 $source2
```
This is a deep merge operation but not a deep copy operation. Nested objects
that are merged are the same instance on both dicts. If you want a deep copy
along with the merge, then use the `deepCopy` function along with merging. For
example,
```
deepCopy $source | merge $dest
```
`mustMerge` will return an error in case of unsuccessful merge.
### mergeOverwrite, mustMergeOverwrite
Merge two or more dictionaries into one, giving precedence from **right to
left**, effectively overwriting values in the dest dictionary:
Given:
```
dst:
default: default
overwrite: me
key: true
src:
overwrite: overwritten
key: false
```
will result in:
```
newdict:
default: default
overwrite: overwritten
key: false
```
```
$newdict := mergeOverwrite $dest $source1 $source2
```
This is a deep merge operation but not a deep copy operation. Nested objects
that are merged are the same instance on both dicts. If you want a deep copy
along with the merge then use the `deepCopy` function along with merging. For
example,
```
deepCopy $source | mergeOverwrite $dest
```
`mustMergeOverwrite` will return an error in case of unsuccessful merge.
### keys
The `keys` function will return a `list` of all of the keys in one or more
`dict` types. Since a dictionary is _unordered_, the keys will not be in a
predictable order. They can be sorted with `sortAlpha`.
```
keys $myDict | sortAlpha
```
When supplying multiple dictionaries, the keys will be concatenated. Use the
`uniq` function along with `sortAlpha` to get a unique, sorted list of keys.
```
keys $myDict $myOtherDict | uniq | sortAlpha
```
### pick
The `pick` function selects just the given keys out of a dictionary, creating a
new `dict`.
```
$new := pick $myDict "name1" "name2"
```
The above returns `{name1: value1, name2: value2}`
### omit
The `omit` function is similar to `pick`, except it returns a new `dict` with
all the keys that _do not_ match the given keys.
```
$new := omit $myDict "name1" "name3"
```
The above returns `{name2: value2}`
### values
The `values` function is similar to `keys`, except it returns a new `list` with
all the values of the source `dict` (only one dictionary is supported).
```
$vals := values $myDict
```
The above returns `list["value1", "value2", "value 3"]`. Note that the `values`
function gives no guarantees about the result ordering; if you care about this,
then use `sortAlpha`.
### deepCopy, mustDeepCopy
The `deepCopy` and `mustDeepCopy` functions take a value and make a deep copy
of the value. This includes dicts and other structures. `deepCopy` panics when
there is a problem, while `mustDeepCopy` returns an error to the template system
when there is an error.
```
dict "a" 1 "b" 2 | deepCopy
```
### A Note on Dict Internals
A `dict` is implemented in Go as a `map[string]interface{}`. Go developers can
pass `map[string]interface{}` values into the context to make them available to
templates as `dict`s.
## Encoding Functions
Helm has the following encoding and decoding functions:
- `b64enc`/`b64dec`: Encode or decode with Base64
- `b32enc`/`b32dec`: Encode or decode with Base32
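For example, round-tripping a string through Base64:

```
b64enc "hello"
b64dec "aGVsbG8="
```

The first expression produces `aGVsbG8=` and the second decodes it back to
`hello`.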
## Lists and List Functions
Helm provides a simple `list` type that can contain arbitrary sequential lists
of data. This is similar to arrays or slices, but lists are designed to be used
as immutable data types.
Create a list of integers:
```
$myList := list 1 2 3 4 5
```
The above creates a list of `[1 2 3 4 5]`.
Helm provides the following list functions: [append
(mustAppend)](#append-mustappend), [compact
(mustCompact)](#compact-mustcompact), [concat](#concat), [first
(mustFirst)](#first-mustfirst), [has (mustHas)](#has-musthas), [initial
(mustInitial)](#initial-mustinitial), [last (mustLast)](#last-mustlast),
[prepend (mustPrepend)](#prepend-mustprepend), [rest
(mustRest)](#rest-mustrest), [reverse (mustReverse)](#reverse-mustreverse),
[seq](#seq), [index](#index), [slice (mustSlice)](#slice-mustslice), [uniq
(mustUniq)](#uniq-mustuniq), [until](#until), [untilStep](#untilstep), and
[without (mustWithout)](#without-mustwithout).
### first, mustFirst
To get the head item on a list, use `first`.
`first $myList` returns `1`
`first` panics if there is a problem, while `mustFirst` returns an error to the
template engine if there is a problem.
### rest, mustRest
To get the tail of the list (everything but the first item), use `rest`.
`rest $myList` returns `[2 3 4 5]`
`rest` panics if there is a problem, while `mustRest` returns an error to the
template engine if there is a problem.
### last, mustLast
To get the last item on a list, use `last`:
`last $myList` returns `5`. This is roughly analogous to reversing a list and
then calling `first`.
### initial, mustInitial
This complements `last` by returning all _but_ the last element. `initial
$myList` returns `[1 2 3 4]`.
`initial` panics if there is a problem, while `mustInitial` returns an error to
the template engine if there is a problem.
### append, mustAppend
Append a new item to an existing list, creating a new list.
```
$new := append $myList 6
```
The above would set `$new` to `[1 2 3 4 5 6]`. `$myList` would remain unaltered.
`append` panics if there is a problem, while `mustAppend` returns an error to the
template engine if there is a problem.
### prepend, mustPrepend
Push an element onto the front of a list, creating a new list.
```
prepend $myList 0
```
The above would produce `[0 1 2 3 4 5]`. `$myList` would remain unaltered.
`prepend` panics if there is a problem, while `mustPrepend` returns an error to
the template engine if there is a problem.
### concat
Concatenate an arbitrary number of lists into one.
```
concat $myList ( list 6 7 ) ( list 8 )
```
The above would produce `[1 2 3 4 5 6 7 8]`. `$myList` would remain unaltered.
### reverse, mustReverse
Produce a new list with the reversed elements of the given list.
```
reverse $myList
```
The above would generate the list `[5 4 3 2 1]`.
`reverse` panics if there is a problem, while `mustReverse` returns an error to
the template engine if there is a problem.
### uniq, mustUniq
Generate a list with all of the duplicates removed.
```
list 1 1 1 2 | uniq
```
The above would produce `[1 2]`
`uniq` panics if there is a problem, while `mustUniq` returns an error to the
template engine if there is a problem.
### without, mustWithout
The `without` function filters items out of a list.
```
without $myList 3
```
The above would produce `[1 2 4 5]`
`without` can take more than one filter:
```
without $myList 1 3 5
```
That would produce `[2 4]`
`without` panics if there is a problem, while `mustWithout` returns an error to
the template engine if there is a problem.
### has, mustHas
Test to see if a list has a particular element.
```
has 4 $myList
```
The above would return `true`, while `has "hello" $myList` would return false.
`has` panics if there is a problem, while `mustHas` returns an error to the
template engine if there is a problem.
### compact, mustCompact
Accepts a list and removes entries with empty values.
```
$list := list 1 "a" "foo" ""
$copy := compact $list
```
`compact` will return a new list with the empty (i.e., "") item removed.
`compact` panics if there is a problem and `mustCompact` returns an error to the
template engine if there is a problem.
### index
To get the nth element of a list, use `index list [n]`. To index into
multi-dimensional lists, use `index list [n] [m] ...`
- `index $myList 0` returns `1`. It is the same as `myList[0]`
- `index $myList 0 1` would be the same as `myList[0][1]`
### slice, mustSlice
To get partial elements of a list, use `slice list [n] [m]`. It is equivalent to
`list[n:m]`.
- `slice $myList` returns `[1 2 3 4 5]`. It is the same as `myList[:]`.
- `slice $myList 3` returns `[4 5]`. It is the same as `myList[3:]`.
- `slice $myList 1 3` returns `[2 3]`. It is the same as `myList[1:3]`.
- `slice $myList 0 3` returns `[1 2 3]`. It is the same as `myList[:3]`.
`slice` panics if there is a problem, while `mustSlice` returns an error to the
template engine if there is a problem.
### until
The `until` function builds a range of integers.
```
until 5
```
The above generates the list `[0, 1, 2, 3, 4]`.
This is useful for looping with `range $i, $e := until 5`.
### untilStep
Like `until`, `untilStep` generates a list of counting integers. But it allows
you to define a start, stop, and step:
```
untilStep 3 6 2
```
The above will produce `[3 5]` by starting with 3, and adding 2 until the value
is equal to or greater than 6. This is similar to Python's `range` function.
### seq
Works like the bash `seq` command.
* 1 parameter (end) - will generate all counting integers between 1 and `end`
inclusive.
* 2 parameters (start, end) - will generate all counting integers between
`start` and `end` inclusive incrementing or decrementing by 1.
* 3 parameters (start, step, end) - will generate all counting integers between
`start` and `end` inclusive incrementing or decrementing by `step`.
```
seq 5 => 1 2 3 4 5
seq -3 => 1 0 -1 -2 -3
seq 0 2 => 0 1 2
seq 2 -2 => 2 1 0 -1 -2
seq 0 2 10 => 0 2 4 6 8 10
seq 0 -2 -5 => 0 -2 -4
```
## Math Functions
All math functions operate on `int64` values unless specified otherwise.
The following math functions are available: [add](#add), [add1](#add1),
[ceil](#ceil), [div](#div), [floor](#floor), [len](#len), [max](#max),
[min](#min), [mod](#mod), [mul](#mul), [round](#round), and [sub](#sub).
### add
Sum numbers with `add`. Accepts two or more inputs.
```
add 1 2 3
```
### add1
To increment by 1, use `add1`.
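For example:

```
add1 7
```

The above returns `8`.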
### sub
To subtract, use `sub`.
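`sub` subtracts its second argument from the first. For example:

```
sub 3 2
```

The above returns `1`.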
### div
Perform integer division with `div`.
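Integer division truncates the remainder. For example:

```
div 10 3
```

The above returns `3`.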
### mod
Modulo with `mod`.
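For example:

```
mod 10 3
```

The above returns `1`.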
### mul
Multiply with `mul`. Accepts two or more inputs.
```
mul 1 2 3
```
### max
Return the largest of a series of integers.
This will return `3`:
```
max 1 2 3
```
### min
Return the smallest of a series of integers.
`min 1 2 3` will return `1`.
### len
Returns the length of the argument as an integer.
```
len .Arg
```
## Float Math Functions
All math functions operate on `float64` values.
### addf
Sum numbers with `addf`.
This will return `5.5`:
```
addf 1.5 2 2
```
### add1f
To increment by 1, use `add1f`.
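For example:

```
add1f 7.5
```

The above returns `8.5`.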
### subf
To subtract, use `subf`.
This is equivalent to `7.5 - 2 - 3` and will return `2.5`:
```
subf 7.5 2 3
```
### divf
Perform floating-point division with `divf`.
This is equivalent to `10 / 2 / 4` and will return `1.25`:
```
divf 10 2 4
```
### mulf
Multiply with `mulf`.
This will return `6`:
```
mulf 1.5 2 2
```
### maxf
Return the largest of a series of floats:
This will return `3`:
```
maxf 1 2.5 3
```
### minf
Return the smallest of a series of floats.
This will return `1.5`:
```
minf 1.5 2 3
```
### floor
Returns the greatest float value less than or equal to the input value.
`floor 123.9999` will return `123.0`.
### ceil
Returns the smallest float value greater than or equal to the input value.
`ceil 123.001` will return `124.0`.
### round
Returns a float value rounded to the given number of digits after the decimal
point.
`round 123.555555 3` will return `123.556`.
## Network Functions
Helm has a single network function, `getHostByName`.
The `getHostByName` function receives a domain name and returns the IP address.
`getHostByName "www.google.com"` would return the corresponding IP address of `www.google.com`.
## File Path Functions
While Helm template functions do not grant access to the filesystem, they do
provide functions for working with strings that follow file path conventions.
Those include [base](#base), [clean](#clean), [dir](#dir), [ext](#ext), and
[isAbs](#isabs).
### base
Return the last element of a path.
```
base "foo/bar/baz"
```
The above prints "baz".
### dir
Return the directory, stripping the last part of the path. So `dir
"foo/bar/baz"` returns `foo/bar`.
### clean
Clean up a path.
```
clean "foo/bar/../baz"
```
The above resolves the `..` and returns `foo/baz`.
### ext
Return the file extension.
```
ext "foo.bar"
```
The above returns `.bar`.
### isAbs
To check whether a file path is absolute, use `isAbs`.
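For example (the paths are arbitrary):

```
isAbs "/usr/local/bin"
isAbs "charts/mychart"
```

The first expression returns `true` and the second returns `false`.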
## Reflection Functions
Helm provides rudimentary reflection tools. These help advanced template
developers understand the underlying Go type information for a particular value.
Helm is written in Go and is strongly typed. The type system applies within
templates.
Go has several primitive _kinds_, like `string`, `slice`, `int64`, and `bool`.
Go has an open _type_ system that allows developers to create their own types.
Helm provides a set of functions for each via [kind functions](#kind-functions)
and [type functions](#type-functions). A [deepEqual](#deepequal) function is
also provided to compare two values.
### Kind Functions
There are two Kind functions. `kindOf` returns the kind of an object:
```
kindOf "hello"
```
The above would return `string`. For simple tests (like in `if` blocks), the
`kindIs` function will let you verify that a value is a particular kind:
```
kindIs "int" 123
```
The above will return `true`.
### Type Functions
Types are slightly harder to work with, so there are three different functions:
- `typeOf` returns the underlying type of a value: `typeOf $foo`
- `typeIs` is like `kindIs`, but for types: `typeIs "*io.Buffer" $myVal`
- `typeIsLike` works as `typeIs`, except that it also dereferences pointers
**Note:** None of these can test whether or not something implements a given
interface, since doing so would require compiling the interface in ahead of
time.
### deepEqual
`deepEqual` returns true if two values are ["deeply
equal"](https://golang.org/pkg/reflect/#DeepEqual).
Unlike the built-in `eq`, it works for non-primitive types as well.
```
deepEqual (list 1 2 3) (list 1 2 3)
```
The above will return `true`.
## Semantic Version Functions
Some version schemes are easily parseable and comparable. Helm provides
functions for working with [SemVer 2](http://semver.org) versions. These include
[semver](#semver) and [semverCompare](#semvercompare). Below you will also find
details on using ranges for comparisons.
### semver
The `semver` function parses a string into a Semantic Version:
```
$version := semver "1.2.3-alpha.1+123"
```
_If the parser fails, it will cause template execution to halt with an error._
At this point, `$version` is a pointer to a `Version` object with the following
properties:
- `$version.Major`: The major number (`1` above)
- `$version.Minor`: The minor number (`2` above)
- `$version.Patch`: The patch number (`3` above)
- `$version.Prerelease`: The prerelease (`alpha.1` above)
- `$version.Metadata`: The build metadata (`123` above)
- `$version.Original`: The original version as a string
Additionally, you can compare a `Version` to another `Version` using the
`Compare` function:
```
semver "1.4.3" | (semver "1.2.3").Compare
```
The above will return `-1`.
The return values are:
- `-1` if the given semver is greater than the semver whose `Compare` method was
called
- `1` if the version whose `Compare` method was called is greater.
- `0` if they are the same version
(Note that in SemVer, the `Metadata` field is not compared during version
comparison operations.)
### semverCompare
A more robust comparison function is provided as `semverCompare`. This version
supports version ranges:
- `semverCompare "1.2.3" "1.2.3"` checks for an exact match
- `semverCompare "~1.2.0" "1.2.3"` checks that the major and minor versions
match, and that the patch number of the second version is _greater than or
equal to_ the first parameter.
The SemVer functions use the [Masterminds semver
library](https://github.com/Masterminds/semver), from the creators of Sprig.
### Basic Comparisons
There are two elements to the comparisons. First, a comparison string is a list
of space or comma separated AND comparisons. These are then separated by || (OR)
comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a
comparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater
than or equal to 4.2.3.
The basic comparisons are:
- `=`: equal (aliased to no operator)
- `!=`: not equal
- `>`: greater than
- `<`: less than
- `>=`: greater than or equal to
- `<=`: less than or equal to
### Working With Prerelease Versions
Pre-releases, for those not familiar with them, are used for software releases
prior to stable or generally available releases. Examples of prereleases include
development, alpha, beta, and release candidate releases. A prerelease may be a
version such as `1.2.3-beta.1`, while the stable release would be `1.2.3`. In the
order of precedence, prereleases come before their associated releases. In this
example `1.2.3-beta.1 < 1.2.3`.
According to the Semantic Version specification prereleases may not be API
compliant with their release counterpart. It says,
> A pre-release version indicates that the version is unstable and might not
> satisfy the intended compatibility requirements as denoted by its associated
> normal version.
SemVer comparisons using constraints without a prerelease comparator will skip
prerelease versions. For example, `>=1.2.3` will skip prereleases when looking
at a list of releases, while `>=1.2.3-0` will evaluate and find prereleases.
The reason for the `0` as a pre-release version in the example comparison is
because pre-releases can only contain ASCII alphanumerics and hyphens (along
with `.` separators), per the spec. Sorting happens in ASCII sort order, again
per the spec. The lowest character is a `0` in ASCII sort order (see an [ASCII
Table](http://www.asciitable.com/))
Understanding ASCII sort ordering is important because A-Z comes before a-z.
That means `>=1.2.3-BETA` will return `1.2.3-alpha`. What you might expect from
case sensitivity doesn't apply here. This is due to ASCII sort ordering which is
what the spec specifies.
### Hyphen Range Comparisons
There are multiple methods to handle ranges and the first is hyphen ranges.
These look like:
- `1.2 - 1.4.5` which is equivalent to `>= 1.2 <= 1.4.5`
- `2.3.4 - 4.5` which is equivalent to `>= 2.3.4 <= 4.5`
### Wildcards In Comparisons
The `x`, `X`, and `*` characters can be used as wildcard characters. This works
for all comparison operators. When used on the `=` operator it falls back to the
patch level comparison (see tilde below). For example,
- `1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
- `>= 1.2.x` is equivalent to `>= 1.2.0`
- `<= 2.x` is equivalent to `< 3`
- `*` is equivalent to `>= 0.0.0`
### Tilde Range Comparisons (Patch)
The tilde (`~`) comparison operator is for patch level ranges when a minor
version is specified and major level changes when the minor number is missing.
For example,
- `~1.2.3` is equivalent to `>= 1.2.3, < 1.3.0`
- `~1` is equivalent to `>= 1, < 2`
- `~2.3` is equivalent to `>= 2.3, < 2.4`
- `~1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
- `~1.x` is equivalent to `>= 1, < 2`
### Caret Range Comparisons (Major)
The caret (`^`) comparison operator is for major level changes once a stable
(1.0.0) release has occurred. Prior to a 1.0.0 release, the minor version acts
as the API stability level. This is useful when comparing API versions, as a
major change is API breaking. For example,
- `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`
- `^1.2.x` is equivalent to `>= 1.2.0, < 2.0.0`
- `^2.3` is equivalent to `>= 2.3, < 3`
- `^2.x` is equivalent to `>= 2.0.0, < 3`
- `^0.2.3` is equivalent to `>=0.2.3 <0.3.0`
- `^0.2` is equivalent to `>=0.2.0 <0.3.0`
- `^0.0.3` is equivalent to `>=0.0.3 <0.0.4`
- `^0.0` is equivalent to `>=0.0.0 <0.1.0`
- `^0` is equivalent to `>=0.0.0 <1.0.0`
## URL Functions
Helm includes the [urlParse](#urlparse), [urlJoin](#urljoin), and
[urlquery](#urlquery) functions enabling you to work with URL parts.
### urlParse
Parses a string as a URL and produces a dict with the URL parts.
```
urlParse "http://admin:[email protected]:8080/api?list=false#anchor"
```
The above returns a dict containing the URL parts:
```yaml
scheme: 'http'
host: 'server.com:8080'
path: '/api'
query: 'list=false'
opaque: nil
fragment: 'anchor'
userinfo: 'admin:secret'
```
This is implemented using the URL package from the Go standard library. For more
info, check https://golang.org/pkg/net/url/#URL
### urlJoin
Joins a map (produced by `urlParse`) back into a URL string.
```
urlJoin (dict "fragment" "fragment" "host" "host:80" "path" "/path" "query" "query" "scheme" "http")
```
The above returns the following string:
```
http://host:80/path?query#fragment
```
### urlquery
Returns the escaped version of the value passed in as an argument so that it is
suitable for embedding in the query portion of a URL.
```
$var := urlquery "string for query"
```
## UUID Functions
Helm can generate UUID v4 universally unique IDs.
```
uuidv4
```
The above returns a new UUID of the v4 (randomly generated) type.
## Kubernetes and Chart Functions
Helm includes functions for working with Kubernetes including
[.Capabilities.APIVersions.Has](#capabilitiesapiversionshas),
[Files](#file-functions), and [lookup](#lookup).
### lookup
`lookup` is used to look up a resource in a running cluster. When used with the
`helm template` command it always returns an empty response.
You can find more detail in the [documentation on the lookup
function](functions_and_pipelines.md/#using-the-lookup-function).
### .Capabilities.APIVersions.Has
Returns whether an API version or resource is available in a cluster.
```
.Capabilities.APIVersions.Has "apps/v1"
.Capabilities.APIVersions.Has "apps/v1/Deployment"
```
More information is available on the [built-in object
documentation](builtin_objects.md).
### File Functions
There are several functions that enable you to access non-special files within a
chart, for example to read application configuration files. These are
documented in [Accessing Files Inside Templates](accessing_files.md).
_Note, the documentation for many of these functions comes from
[Sprig](https://github.com/Masterminds/sprig). Sprig is a template function
library available to Go applications._
title: "Variables"
description: "Using variables in templates."
weight: 8
---
With functions, pipelines, objects, and control structures under our belts, we
can turn to one of the more basic ideas in many programming languages:
variables. In templates, they are less frequently used. But we will see how to
use them to simplify code, and to make better use of `with` and `range`.
In an earlier example, we saw that this code will fail:
```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ .Release.Name }}
  {{- end }}
```
`Release.Name` is not inside of the scope that's restricted in the `with` block.
One way to work around scoping issues is to assign objects to variables that can
be accessed without respect to the present scope.
In Helm templates, a variable is a named reference to another object. It follows
the form `$name`. Variables are assigned with a special assignment operator:
`:=`. We can rewrite the above to use a variable for `Release.Name`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- $relname := .Release.Name -}}
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ $relname }}
  {{- end }}
```
Notice that before we start the `with` block, we assign `$relname :=
.Release.Name`. Now inside of the `with` block, the `$relname` variable still
points to the release name.
Running that will produce this:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: viable-badger-configmap
data:
myvalue: "Hello World"
drink: "coffee"
food: "PIZZA"
release: viable-badger
```
Variables are particularly useful in `range` loops. They can be used on
list-like objects to capture both the index and the value:
```yaml
  toppings: |-
    {{- range $index, $topping := .Values.pizzaToppings }}
      {{ $index }}: {{ $topping }}
    {{- end }}
```
Note that `range` comes first, then the variables, then the assignment operator,
then the list. This will assign the integer index (starting from zero) to
`$index` and the value to `$topping`. Running it will produce:
```yaml
toppings: |-
0: mushrooms
1: cheese
2: peppers
3: onions
```
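For reference, the loop examples in this section assume a `values.yaml` along the
lines of the one built up earlier in this guide. Reconstructed from the rendered
output, it would look roughly like this:
```yaml
favorite:
  drink: coffee
  food: pizza
pizzaToppings:
  - mushrooms
  - cheese
  - peppers
  - onions
```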
For data structures that have both a key and a value, we can use `range` to get
both. For example, we can loop through `.Values.favorite` like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
```
Now on the first iteration, `$key` will be `drink` and `$val` will be `coffee`,
and on the second, `$key` will be `food` and `$val` will be `pizza`. Running the
above will generate this:
```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: eager-rabbit-configmap
data:
myvalue: "Hello World"
drink: "coffee"
food: "pizza"
```
Variables are normally not "global". They are scoped to the block in which they
are declared. Earlier, we assigned `$relname` in the top level of the template.
That variable will be in scope for the entire template. But in our last example,
`$key` and `$val` will only be in scope inside of the
`{{- range $key, $val := .Values.favorite }}` block.
However, there is one variable that is always global - `$` - this variable will
always point to the root context. This can be very useful when you are looping
in a range and you need to know the chart's release name.
An example illustrating this:
```yaml
{{- range .Values.tlsSecrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
  labels:
    # Many helm templates would use `.` below, but that will not work,
    # however `$` will work here
    app.kubernetes.io/name: {{ template "fullname" $ }}
    # I cannot reference .Chart.Name, but I can do $.Chart.Name
    helm.sh/chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
    app.kubernetes.io/instance: "{{ $.Release.Name }}"
    # Value from appVersion in Chart.yaml
    app.kubernetes.io/version: "{{ $.Chart.AppVersion }}"
    app.kubernetes.io/managed-by: "{{ $.Release.Service }}"
type: kubernetes.io/tls
data:
  tls.crt: {{ .certificate }}
  tls.key: {{ .key }}
---
{{- end }}
```
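The template above ranges over a `.Values.tlsSecrets` list. As a rough sketch
(the entry names and placeholder data are hypothetical; only the `name`,
`certificate`, and `key` fields are what the template reads), the corresponding
values might look like:
```yaml
tlsSecrets:
  - name: web-tls
    certificate: <base64-encoded certificate>
    key: <base64-encoded private key>
  - name: api-tls
    certificate: <base64-encoded certificate>
    key: <base64-encoded private key>
```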
So far we have looked at just one template declared in just one file. But one of
the powerful features of the Helm template language is its ability to declare
multiple templates and use them together. We'll turn to that in the next
section. | helm | title Variables description Using variables in templates weight 8 With functions pipelines objects and control structures under our belts we can turn to one of the more basic ideas in many programming languages variables In templates they are less frequently used But we will see how to use them to simplify code and to make better use of with and range In an earlier example we saw that this code will fail yaml drink food release Release Name is not inside of the scope that s restricted in the with block One way to work around scoping issues is to assign objects to variables that can be accessed without respect to the present scope In Helm templates a variable is a named reference to another object It follows the form name Variables are assigned with a special assignment operator We can rewrite the above to use a variable for Release Name yaml apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World drink food release Notice that before we start the with block we assign relname Release Name Now inside of the with block the relname variable still points to the release name Running that will produce this yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name viable badger configmap data myvalue Hello World drink coffee food PIZZA release viable badger Variables are particularly useful in range loops They can be used on list like objects to capture both the index and the value yaml toppings Note that range comes first then the variables then the assignment operator then the list This will assign the integer index starting from zero to index and the value to topping Running it will produce yaml toppings 0 mushrooms 1 cheese 2 peppers 3 onions For data structures that have both a key and a value we can use range to get both For example we can loop through Values favorite like this yaml apiVersion v1 kind ConfigMap metadata name configmap data myvalue Hello World Now on the first iteration key will be drink and val will be coffee and on the second key will be food and val will be pizza Running the above will generate this yaml Source mychart templates configmap yaml apiVersion v1 kind ConfigMap metadata name eager rabbit configmap data myvalue Hello World drink coffee food pizza Variables are normally not global They are scoped to the block in which they are declared Earlier we assigned relname in the top level of the template That variable will be in scope for the entire template But in our last example key and val will only be in scope inside of the block However there is one variable that is always global this variable will always point to the root context This can be very useful when you are looping in a range and you need to know the chart s release name An example illustrating this yaml apiVersion v1 kind Secret metadata name labels Many helm templates would use below but that will not work however will work here app kubernetes io name I cannot reference Chart Name but I can do Chart Name helm sh chart app kubernetes io instance Value from appVersion in Chart yaml app kubernetes io version app kubernetes io managed by type kubernetes io tls data tls crt tls key So far we have looked at just one template declared in just one file But one of the powerful features of the Helm template language is its ability to declare multiple templates and use them together We ll turn to that in the next section |
---
title: "Glossary"
description: "Terms used to describe components of Helm's architecture."
weight: 10
---
# Glossary
## Chart
A Helm package that contains information sufficient for installing a set of
Kubernetes resources into a Kubernetes cluster.
Charts contain a `Chart.yaml` file as well as templates, default values
(`values.yaml`), and dependencies.
Charts are developed in a well-defined directory structure, and then packaged
into an archive format called a _chart archive_.
## Chart Archive
A _chart archive_ is a tarred and gzipped (and optionally signed) chart.
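For example, running `helm package` against a chart directory (here a
hypothetical `mychart/` whose `Chart.yaml` declares version `0.1.0`) produces
such an archive, `mychart-0.1.0.tgz`, in the current directory:
```console
$ helm package mychart/
```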
## Chart Dependency (Subcharts)
Charts may depend upon other charts. There are two ways a dependency may occur:
- Soft dependency: A chart may simply not function without another chart being
installed in a cluster. Helm does not provide tooling for this case. In this
case, dependencies may be managed separately.
- Hard dependency: A chart may contain (inside of its `charts/` directory)
another chart upon which it depends. In this case, installing the chart will
install all of its dependencies. In this case, a chart and its dependencies
are managed as a collection.
When a chart is packaged (via `helm package`) all of its hard dependencies are
bundled with it.
## Chart Version
Charts are versioned according to the [SemVer 2 spec](https://semver.org). A
version number is required on every chart.
## Chart.yaml
Information about a chart is stored in a special file called `Chart.yaml`. Every
chart must have this file.
## Helm (and helm)
Helm is the package manager for Kubernetes. As an operating system package
manager makes it easy to install tools on an OS, Helm makes it easy to install
applications and resources into Kubernetes clusters.
While _Helm_ is the name of the project, the command line client is also named
`helm`. By convention, when speaking of the project, _Helm_ is capitalized. When
speaking of the client, _helm_ is in lowercase.
## Helm Configuration Files (XDG)
Helm stores its configuration files in XDG directories. These directories are
created the first time `helm` is run.
## Kube Config (KUBECONFIG)
The Helm client learns about Kubernetes clusters by using files in the _Kube
config_ file format. By default, Helm attempts to find this file in the place
where `kubectl` creates it (`$HOME/.kube/config`).
## Lint (Linting)
To _lint_ a chart is to validate that it follows the conventions and
requirements of the Helm chart standard. Helm provides tools to do this, notably
the `helm lint` command.
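For example, assuming a chart directory named `mychart/`, linting it reports
any warnings or errors it finds:
```console
$ helm lint mychart/
```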
## Provenance (Provenance file)
Helm charts may be accompanied by a _provenance file_ which provides information
about where the chart came from and what it contains.
Provenance files are one part of the Helm security story. A provenance contains
a cryptographic hash of the chart archive file, the Chart.yaml data, and a
signature block (an OpenPGP "clearsign" block). When coupled with a keychain,
this provides chart users with the ability to:
- Validate that a chart was signed by a trusted party
- Validate that the chart file has not been tampered with
- Validate the contents of a chart's metadata (`Chart.yaml`)
- Quickly match a chart to its provenance data
Provenance files have the `.prov` extension, and can be served from a chart
repository server or any other HTTP server.
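As a sketch (the key name, keyring paths, and chart name below are
placeholders), a provenance file can be generated at packaging time and checked
before installation:
```console
$ helm package --sign --key 'John Doe' --keyring ~/.gnupg/secring.gpg mychart/
$ helm verify --keyring ~/.gnupg/pubring.gpg mychart-0.1.0.tgz
```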
## Release
When a chart is installed, the Helm library creates a _release_ to track that
installation.
A single chart may be installed many times into the same cluster, and create
many different releases. For example, one can install three PostgreSQL databases
by running `helm install` three times with a different release name.
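For example, assuming a repository alias `examples` that serves a `postgresql`
chart (both names are placeholders), each of these installs creates its own
release:
```console
$ helm install db1 examples/postgresql
$ helm install db2 examples/postgresql
$ helm install db3 examples/postgresql
```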
## Release Number (Release Version)
A single release can be updated multiple times. A sequential counter is used to
track releases as they change. After a first `helm install`, a release will have
_release number_ 1. Each time a release is upgraded or rolled back, the release
number will be incremented.
## Rollback
A release can be upgraded to a newer chart or configuration. But since release
history is stored, a release can also be _rolled back_ to a previous release
number. This is done with the `helm rollback` command.
Importantly, a rolled back release will receive a new release number.
| Operation | Release Number |
|------------|------------------------------------------------------|
| install | release 1 |
| upgrade | release 2 |
| upgrade | release 3 |
| rollback 1 | release 4 (but running the same config as release 1) |
The above table illustrates how release numbers increment across install,
upgrade, and rollback.
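As a sketch (the release name `mydb` here is hypothetical), a release's history
can be inspected and an earlier revision restored like this:
```console
$ helm history mydb
$ helm rollback mydb 1
```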
## Helm Library (or SDK)
The Helm Library (or SDK) refers to the Go code that interacts directly with the
Kubernetes API server to install, upgrade, query, and remove Kubernetes
resources. It can be imported into a project to use Helm as a client library
instead of a CLI.
## Repository (Repo, Chart Repository)
Helm charts may be stored on dedicated HTTP servers called _chart repositories_
(_repositories_, or just _repos_).
A chart repository server is a simple HTTP server that can serve an `index.yaml`
file that describes a batch of charts, and provides information on where each
chart can be downloaded from. (Many chart repositories serve the charts as well
as the `index.yaml` file.)
A Helm client can point to zero or more chart repositories. By default, Helm
clients are not configured with any chart repositories. Chart repositories can
be added at any time using the `helm repo add` command.
## Chart Registry (OCI-based Registry)
A Helm Chart Registry is an [OCI-based](https://opencontainers.org/about/overview/) storage and distribution system that is used to host and share Helm chart packages. For more information, see the [Helm documentation on registries](https://helm.sh/docs/topics/registries/).
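As a sketch, pushing a packaged chart to an OCI registry and installing from it
looks like this (the registry host and chart name are placeholders; this
requires Helm 3.8.0 or later):
```console
$ helm registry login registry.example.com
$ helm push mychart-0.1.0.tgz oci://registry.example.com/charts
$ helm install my-release oci://registry.example.com/charts/mychart --version 0.1.0
```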
## Values (Values Files, values.yaml)
Values provide a way to override template defaults with your own information.
Helm Charts are "parameterized", which means the chart developer may expose
configuration that can be overridden at installation time. For example, a chart
may expose a `username` field that allows setting a user name for a service.
These exposed variables are called _values_ in Helm parlance.
Values can be set during `helm install` and `helm upgrade` operations, either by
passing them in directly, or by using a `values.yaml` file.
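For example, assuming a chart reference `examples/mychart` that exposes such a
`username` value (the names are placeholders), values can be supplied on the
command line or from a file:
```console
$ helm install my-release examples/mychart --set username=admin
$ helm upgrade my-release examples/mychart -f my-values.yaml
```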
---
title: "Chart Releaser Action to Automate GitHub Page Charts"
description: "Describe how to use Chart Releaser Action to automate releasing charts through GitHub pages."
weight: 3
---
This guide describes how to use [Chart Releaser
Action](https://github.com/marketplace/actions/helm-chart-releaser) to automate
releasing charts through GitHub pages. Chart Releaser Action is a GitHub Action
workflow to turn a GitHub project into a self-hosted Helm chart repo, using
[helm/chart-releaser](https://github.com/helm/chart-releaser) CLI tool.
## Repository Changes
Create a Git repository under your GitHub organization. You could give the name
of the repository as `helm-charts`, though other names are also acceptable. The
sources of all the charts can be placed under the `main` branch. The charts
should be placed under `/charts` directory at the top-level of the directory
tree.
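For example, a minimal layout might look like this (the chart name `my-chart` is only a placeholder):
```
helm-charts/
├── charts/
│   └── my-chart/
│       ├── Chart.yaml
│       ├── values.yaml
│       └── templates/
└── README.md
```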
There should be another branch named `gh-pages` to publish the charts. The
changes to that branch will be automatically created by the Chart Releaser
Action described here. However, you can create that `gh-pages` branch yourself
and add a `README.md` file, which will be visible to users visiting the page.
You can add instructions in the `README.md` for chart installation like this
(replace `<alias>`, `<orgname>`, and `<chart-name>`):
```
## Usage
[Helm](https://helm.sh) must be installed to use the charts. Please refer to
Helm's [documentation](https://helm.sh/docs) to get started.
Once Helm has been set up correctly, add the repo as follows:
helm repo add <alias> https://<orgname>.github.io/helm-charts
If you had already added this repo earlier, run `helm repo update` to retrieve
the latest versions of the packages. You can then run `helm search repo
<alias>` to see the charts.
To install the <chart-name> chart:
helm install my-<chart-name> <alias>/<chart-name>
To uninstall the chart:
helm delete my-<chart-name>
```
The charts will be published to a website with URL like this:
https://<orgname>.github.io/helm-charts
## GitHub Actions Workflow
Create GitHub Actions workflow file in the `main` branch at
`.github/workflows/release.yml`
```
name: Release Charts
on:
  push:
    branches:
      - main
jobs:
  release:
    permissions:
      contents: write
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "[email protected]"
      - name: Run chart-releaser
        uses: helm/[email protected]
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```
The above configuration uses
[@helm/chart-releaser-action](https://github.com/helm/chart-releaser-action) to
turn your GitHub project into a self-hosted Helm chart repo. On every push to
`main`, it checks each chart in your project; whenever there is a new chart
version, it creates a corresponding GitHub release named for the chart version,
adds the Helm chart artifacts to that release, and creates or updates an
`index.yaml` file with metadata about those releases, which is then hosted on
GitHub pages.
The Chart Releaser Action version number used in the above example is `v1.6.0`.
You can change it to the [latest available
version](https://github.com/helm/chart-releaser-action/releases).
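If your charts live somewhere other than the default `charts/` directory, the
action can be pointed at them with its `charts_dir` input. A minimal sketch
(the directory name below is an assumption; adjust it to your layout):
```
      - name: Run chart-releaser
        uses: helm/[email protected]
        with:
          charts_dir: stable/charts # assumption: wherever your charts actually live
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```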
Note: The Chart Releaser Action is almost always used in tandem with the [Helm Testing
Action](https://github.com/marketplace/actions/helm-chart-testing) and [Kind
Action](https://github.com/marketplace/actions/kind-cluster).
---
title: "Chart Development Tips and Tricks"
description: "Covers some of the tips and tricks Helm chart developers have learned while building production-quality charts."
weight: 1
aliases: ["/docs/charts_tips_and_tricks/"]
---
This guide covers some of the tips and tricks Helm chart developers have learned
while building production-quality charts.
## Know Your Template Functions
Helm uses [Go templates](https://godoc.org/text/template) for templating your
resource files. While Go ships several built-in functions, we have added many
others.
First, we added all of the functions in the [Sprig
library](https://masterminds.github.io/sprig/), except `env` and `expandenv`, for security reasons.
We also added two special template functions: `include` and `required`. The
`include` function allows you to bring in another template, and then pass the
results to other template functions.
For example, this template snippet includes a template called `mytpl`, then
lowercases the result, then wraps that in double quotes.
```yaml
value: {{ include "mytpl" . | lower | quote }}
```
The `required` function allows you to declare a particular values entry as
required for template rendering. If the value is empty, the template rendering
will fail with a user submitted error message.
The following example of the `required` function declares an entry for
`.Values.who` is required, and will print an error message when that entry is
missing:
```yaml
value: {{ required "A valid .Values.who entry required!" .Values.who }}
```
## Quote Strings, Don't Quote Integers
When you are working with string data, you are always safer quoting the strings
than leaving them as bare words:
```yaml
name: {{ .Values.MyName | quote }}
```
But when working with integers _do not quote the values._ That can, in many
cases, cause parsing errors inside of Kubernetes.
```yaml
port: {{ .Values.Port }}
```
This remark does not apply to env variable values, which are expected to be
strings, even if they represent integers:
```yaml
env:
  - name: HOST
    value: "http://host"
  - name: PORT
    value: "1234"
```
## Using the 'include' Function
Go provides a way of including one template in another using a built-in
`template` directive. However, the built-in function cannot be used in Go
template pipelines.
To make it possible to include a template, and then perform an operation on that
template's output, Helm has a special `include` function:
```
{{ include "toYaml" $value | indent 2 }}
```
The above includes a template called `toYaml`, passes it `$value`, and then
passes the output of that template to the `indent` function.
Because YAML ascribes significance to indentation levels and whitespace, this is
one great way to include snippets of code, but handle indentation in a relevant
context.
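For instance, here is a sketch (the chart and template names are hypothetical) that defines a block of labels in `_helpers.tpl` and splices it into a manifest at the correct indentation with `nindent`:
```yaml
{{/* templates/_helpers.tpl */}}
{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

# templates/deployment.yaml (excerpt)
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
```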
## Using the 'required' function
Go provides a way for setting template options to control behavior when a map is
indexed with a key that's not present in the map. This is typically set with
`template.Options("missingkey=option")`, where `option` can be `default`,
`zero`, or `error`. While setting this option to error will stop execution with
an error, this would apply to every missing key in the map. There may be
situations where a chart developer wants to enforce this behavior for select
values in the `values.yaml` file.
The `required` function gives developers the ability to declare a value entry as
required for template rendering. If the entry is empty in `values.yaml`, the
template will not render and will return an error message supplied by the
developer.
For example:
```
{{ required "A valid foo is required!" .Values.foo }}
```
The above will render the template when `.Values.foo` is defined, but will fail
to render and exit when `.Values.foo` is undefined.
## Using the 'tpl' Function
The `tpl` function allows developers to evaluate strings as templates inside a
template. This is useful to pass a template string as a value to a chart or
render external configuration files. Syntax: `{{ tpl TEMPLATE_STRING VALUES }}`
Examples:
```yaml
# values
template: ""
name: "Tom"
# template
# output
Tom
```
Rendering an external configuration file:
```yaml
# external configuration file conf/app.conf
firstName={{ .Values.firstName }}
lastName={{ .Values.lastName }}

# values
firstName: Peter
lastName: Parker

# template
{{ tpl (.Files.Get "conf/app.conf") . }}

# output
firstName=Peter
lastName=Parker
```
## Creating Image Pull Secrets
Image pull secrets are essentially a combination of _registry_, _username_, and
_password_. You may need them in an application you are deploying, but to
create them requires running `base64` a couple of times. We can write a helper
template to compose the Docker configuration file for use as the Secret's
payload. Here is an example:
First, assume that the credentials are defined in the `values.yaml` file like
so:
```yaml
imageCredentials:
  registry: quay.io
  username: someone
  password: sillyness
  email: [email protected]
```
We then define our helper template as follows:
```
}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
```
Finally, we use the helper template in a larger template to create the Secret
manifest:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
```
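A Pod (or the Pod template of a Deployment) can then reference the Secret created above; a sketch, where the image name is only a placeholder:
```yaml
spec:
  imagePullSecrets:
    - name: myregistrykey
  containers:
    - name: app
      image: quay.io/someone/private-image:1.0.0 # placeholder image
```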
## Automatically Roll Deployments
Often times ConfigMaps or Secrets are injected as configuration files in
containers or there are other external dependency changes that require rolling
pods. Depending on the application a restart may be required should those be
updated with a subsequent `helm upgrade`, but if the deployment spec itself
didn't change the application keeps running with the old configuration resulting
in an inconsistent deployment.
The `sha256sum` function can be used to ensure a deployment's annotation section
is updated if another file changes:
```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
```
NOTE: If you're adding this to a library chart you won't be able to access your
file in `$.Template.BasePath`. Instead you can reference your definition with
`{{ include ("mychart.configmap") . | sha256sum }}`.
In the event you always want to roll your deployment, you can use a similar
annotation step as above, instead replacing with a random string so it always
changes and causes the deployment to roll:
```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
[...]
```
Each invocation of the template function will generate a unique random string.
This means that if it's necessary to sync the random strings used by multiple
resources, all relevant resources will need to be in the same template file.
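One way to keep them in sync (a sketch; the variable name is arbitrary) is to compute the random value once into a template variable and reuse it for every resource in that file:
```yaml
{{- $rollme := randAlphaNum 5 | quote }}
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ $rollme }}
---
kind: StatefulSet
spec:
  template:
    metadata:
      annotations:
        rollme: {{ $rollme }}
```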
Both of these methods allow your Deployment to leverage the built in update
strategy logic to avoid taking downtime.
NOTE: In the past we recommended using the `--recreate-pods` flag as another
option. This flag has been marked as deprecated in Helm 3 in favor of the more
declarative method above.
## Tell Helm Not To Uninstall a Resource
Sometimes there are resources that should not be uninstalled when Helm runs a
`helm uninstall`. Chart developers can add an annotation to a resource to
prevent it from being uninstalled.
```yaml
kind: Secret
metadata:
  annotations:
    helm.sh/resource-policy: keep
[...]
```
The annotation `helm.sh/resource-policy: keep` instructs Helm to skip deleting
this resource when a helm operation (such as `helm uninstall`, `helm upgrade` or
`helm rollback`) would result in its deletion. _However_, this resource becomes
orphaned. Helm will no longer manage it in any way. This can lead to problems if
using `helm install --replace` on a release that has already been uninstalled,
but has kept resources.
## Using "Partials" and Template Includes
Sometimes you want to create some reusable parts in your chart, whether they're
blocks or template partials. And often, it's cleaner to keep these in their own
files.
In the `templates/` directory, any file that begins with an underscore(`_`) is
not expected to output a Kubernetes manifest file. So by convention, helper
templates and partials are placed in a `_helpers.tpl` file.
## Complex Charts with Many Dependencies
Many of the charts in the CNCF [Artifact
Hub](https://artifacthub.io/packages/search?kind=0) are "building blocks" for
creating more advanced applications. But charts may be used to create instances
of large-scale applications. In such cases, a single umbrella chart may have
multiple subcharts, each of which functions as a piece of the whole.
The current best practice for composing a complex application from discrete
parts is to create a top-level umbrella chart that exposes the global
configurations, and then use the `charts/` subdirectory to embed each of the
components.
## YAML is a Superset of JSON
According to the YAML specification, YAML is a superset of JSON. That means that
any valid JSON structure ought to be valid in YAML.
This has an advantage: Sometimes template developers may find it easier to
express a datastructure with a JSON-like syntax rather than deal with YAML's
whitespace sensitivity.
As a best practice, templates should follow a YAML-like syntax _unless_ the JSON
syntax substantially reduces the risk of a formatting issue.
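For example, the env list from earlier could be written inline in JSON style where that reads more clearly:
```yaml
env: [{"name": "HOST", "value": "http://host"}, {"name": "PORT", "value": "1234"}]
```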
## Be Careful with Generating Random Values
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on. These are fine to use. But be aware that during
upgrades, templates are re-executed. When a template run generates data that
differs from the last run, that will trigger an update of that resource.
## Install or Upgrade a Release with One Command
Helm provides a way to perform an install-or-upgrade as a single command. Use
`helm upgrade` with the `--install` command. This will cause Helm to see if the
release is already installed. If not, it will run an install. If it is, then the
existing release will be upgraded.
```console
$ helm upgrade --install <release name> --values <values file> <chart directory>
```
---
title: "Localizing Helm Documentation"
description: "Instructions for localizing the Helm documentation."
aliases: ["/docs/localization/"]
weight: 5
---
This guide explains how to localize the Helm documentation.
## Getting Started
Contributions for translations use the same process as contributions for
documentation. Translations are supplied through [pull
requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)
to the [helm-www](https://github.com/helm/helm-www) git repository and pull
requests are reviewed by the team that manages the website.
### Two-letter Language Code
Documentation is organized by the [ISO 639-1
standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) for the
language codes. For example, the two-letter code for Korean is `ko`.
In content and configuration you will find the language code in use. Here are 3
examples:
- In the `content` directory the language codes are the subdirectories and the
localized content for the language is in each directory. Primarily in the
`docs` subdirectory of each language code directory.
- The `i18n` directory contains a configuration file for each language with
phrases used on the website. The files are named with the pattern `[LANG].toml`
where `[LANG]` is the two letter language code.
- In the top level `config.toml` file there is configuration for navigation and
other details organized by language code.
English, with a language code of `en`, is the default language and source for
translations.
### Fork, Branch, Change, Pull Request
To contribute translations start by [creating a
fork](https://help.github.com/en/github/getting-started-with-github/fork-a-repo)
of the [helm-www repository](https://github.com/helm/helm-www) on GitHub. You
will start by committing the changes to your fork.
By default your fork will be set to work on the default branch known as `main`.
Please use branches to develop your changes and create pull requests. If you are
unfamiliar with branches you can [read about them in the GitHub
documentation](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-branches).
Once you have a branch make changes to add translations and localize the content
to a language.
Note, Helm uses a [Developer Certificate of
Origin](https://developercertificate.org/). All commits need to be signed off.
When making a commit you can use the `-s` or `--signoff` flag to use your Git
configured name and email address to signoff on the commit. More details are
available in the
[CONTRIBUTING.md](https://github.com/helm/helm-www/blob/main/CONTRIBUTING.md#sign-your-work)
file.
When you are ready, create a [pull
request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)
with the translation back to the helm-www repository.
Once a pull request has been created one of the maintainers will review it.
Details on that process are in the
[CONTRIBUTING.md](https://github.com/helm/helm-www/blob/main/CONTRIBUTING.md)
file.
## Translating Content
Localizing all of the Helm content is a large task. It is ok to start small. The
translations can be expanded over time.
### Starting A New Language
When starting a new language there is a minimum needed. This includes:
- Adding a `content/[LANG]/docs` directory containing an `_index.md` file. This
is the top level documentation landing page.
- Creating a `[LANG].toml` file in the `i18n` directory. Initially you can copy
the `en.toml` file as a starting point.
- Adding a section for the language to the `config.toml` file to expose the new
language. An existing language section can serve as a starting point.
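As a rough sketch, the initial scaffolding for Korean (`ko`) could be created like this and then translated and adjusted:
```sh
mkdir -p content/ko/docs
cp content/en/docs/_index.md content/ko/docs/_index.md
cp i18n/en.toml i18n/ko.toml
# then add a [languages.ko] section to config.toml
```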
### Translating
Translated content needs to reside in the `content/[LANG]/docs` directory. It
should have the same URL as the English source. For example, to translate the
intro into Korean it can be useful to copy the English source like:
```sh
mkdir -p content/ko/docs/intro
cp content/en/docs/intro/install.md content/ko/docs/intro/install.md
```
The content in the new file can then be translated into the other language.
Do not add an untranslated copy of an English file to `content/[LANG]/`.
Once a language exists on the site, any untranslated pages will redirect to
English automatically. Translation takes time, and you always want to be
translating the most current version of the docs, not an outdated fork.
Make sure you remove any `aliases` lines from the header section. A line like
`aliases: ["/docs/using_helm/"]` does not belong in the translations. Those
are redirections for old links which don't exist for new pages.
Note, translation tools can help with the process. This includes machine
generated translations. Machine generated translations should be edited or
otherwise reviewed for grammar and meaning by a native language speaker before
publishing.
## Navigating Between Languages

The site global
[config.toml](https://github.com/helm/helm-www/blob/main/config.toml#L83-L89)
file is where language navigation is configured.
To add a new language, add a new set of parameters using the [two-letter
language code](./localization/#two-letter-language-code) defined above. Example:
```
# Korean
[languages.ko]
title = "Helm"
description = "Helm - The Kubernetes Package Manager."
contentDir = "content/ko"
languageName = "한국어 Korean"
weight = 1
```
## Resolving Internal Links
Translated content will sometimes include links to pages that only exist in
another language. This will result in site [build
errors](https://app.netlify.com/sites/helm-merge/deploys). Example:
```
12:45:31 PM: htmltest started at 12:45:30 on app
12:45:31 PM: ========================================================================
12:45:31 PM: ko/docs/chart_template_guide/accessing_files/index.html
12:45:31 PM: hash does not exist --- ko/docs/chart_template_guide/accessing_files/index.html --> #basic-example
12:45:31 PM: ✘✘✘ failed in 197.566561ms
12:45:31 PM: 1 error in 212 documents
```
To resolve this, you need to check your content for internal links.
* anchor links need to reflect the translated `id` value
* internal page links need to be fixed
For internal pages that do not exist _(or have not been translated yet)_, the
site will not build until a correction is made. As a fallback, the url can point
to another language where that content _does_ exist as follows:
`{{< relref path="/docs/topics/library_charts.md" lang="en" >}}`
See the [Hugo Docs on cross references between
languages](https://gohugo.io/content-management/cross-references/#link-to-another-language-version)
for more info.
---
title: "Release Checklist"
description: "Checklist for maintainers when releasing the next version of Helm."
weight: 2
---
# A Maintainer's Guide to Releasing Helm
Time for a new Helm release! As a Helm maintainer cutting a release, you are
the best person to [update this
release checklist](https://github.com/helm/helm-www/blob/main/content/en/docs/community/release_checklist.md)
should your experiences vary from what's documented here.
All releases will be of the form vX.Y.Z where X is the major version number, Y
is the minor version number and Z is the patch release number. This project
strictly follows [semantic versioning](https://semver.org/) so following this
step is critical.
Helm announces in advance the date of its next minor release. Every effort
should be made to respect the announced date. Furthermore, when starting
the release process, the date for the next release should have been selected
as it will be used in the release process.
These directions will cover initial configuration followed by the release
process for three different kinds of releases:
* Major Releases - released less frequently - have breaking changes
* Minor Releases - released every 3 to 4 months - no breaking changes
* Patch Releases - released monthly - do not require all steps in this guide
[Initial Configuration](#initial-configuration)
1. [Create the Release Branch](#1-create-the-release-branch)
2. [Major/Minor releases: Change the Version Number in Git](#2-majorminor-releases-change-the-version-number-in-git)
3. [Major/Minor releases: Commit and Push the Release Branch](#3-majorminor-releases-commit-and-push-the-release-branch)
4. [Major/Minor releases: Create a Release Candidate](#4-majorminor-releases-create-a-release-candidate)
5. [Major/Minor releases: Iterate on Successive Release Candidates](#5-majorminor-releases-iterate-on-successive-release-candidates)
6. [Finalize the Release](#6-finalize-the-release)
7. [Write the Release Notes](#7-write-the-release-notes)
8. [PGP Sign the downloads](#8-pgp-sign-the-downloads)
9. [Publish Release](#9-publish-release)
10. [Update Docs](#10-update-docs)
11. [Tell the Community](#11-tell-the-community)
## Initial Configuration
### Set Up Git Remote
It is important to note that this document assumes that the git remote in your
repository that corresponds to <https://github.com/helm/helm> is named
"upstream". If yours is not (for example, if you've chosen to name it "origin"
or something similar instead), be sure to adjust the listed snippets for your
local environment accordingly. If you are not sure what your upstream remote is
named, use a command like `git remote -v` to find out.
If you don't have an [upstream
remote](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork)
, you can add one using something like:
```shell
git remote add upstream [email protected]:helm/helm.git
```
### Set Up Environment Variables
In this doc, we are going to reference a few environment variables as well,
which you may want to set for convenience. For major/minor releases, use the
following:
```shell
export RELEASE_NAME=vX.Y.0
export RELEASE_BRANCH_NAME="release-X.Y"
export RELEASE_CANDIDATE_NAME="$RELEASE_NAME-rc.1"
```
If you are creating a patch release, use the following instead:
```shell
export PREVIOUS_PATCH_RELEASE=vX.Y.Z
export RELEASE_NAME=vX.Y.Z+1
export RELEASE_BRANCH_NAME="release-X.Y"
```
### Set Up Signing Key
We are also going to be adding security and verification of the release process
by hashing the binaries and providing signature files. We perform this using
[GitHub and
GPG](https://help.github.com/en/articles/about-commit-signature-verification).
If you do not have GPG already setup you can follow these steps:
1. [Install GPG](https://gnupg.org/index.html)
2. [Generate GPG
key](https://help.github.com/en/articles/generating-a-new-gpg-key)
3. [Add key to GitHub
account](https://help.github.com/en/articles/adding-a-new-gpg-key-to-your-github-account)
4. [Set signing key in
Git](https://help.github.com/en/articles/telling-git-about-your-signing-key)
Once you have a signing key you need to add it to the KEYS file at the root of
the repository. The instructions for adding it to the KEYS file are in the file.
If you have not done so already, you need to add your public key to the
keyserver network. If you use GnuPG you can follow the [instructions provided by
Debian](https://debian-administration.org/article/451/Submitting_your_GPG_key_to_a_keyserver).
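For example, with GnuPG you could publish your key like this (the key ID and keyserver are placeholders):
```shell
gpg --keyserver keyserver.ubuntu.com --send-keys <YOUR_KEY_ID>
```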
## 1. Create the Release Branch
### Major/Minor Releases
Major releases are for new feature additions and behavioral changes *that break
backwards compatibility*. Minor releases are for new feature additions that do
not break backwards compatibility. To create a major or minor release, start by
creating a `release-X.Y` branch from main.
```shell
git fetch upstream
git checkout upstream/main
git checkout -b $RELEASE_BRANCH_NAME
```
This new branch is going to be the base for the release, which we are going to
iterate upon later.
Verify that a [helm/helm milestone](https://github.com/helm/helm/milestones)
for the release exists on GitHub (creating it if necessary). Make sure PRs and
issues for this release are in this milestone.
For major & minor releases, move on to step 2: [Major/Minor releases: Change
the Version Number in Git](#2-majorminor-releases-change-the-version-number-in-git).
### Patch releases
Patch releases are a few critical cherry-picked fixes to existing releases.
Start by creating a `release-X.Y` branch:
```shell
git fetch upstream
git checkout -b $RELEASE_BRANCH_NAME upstream/$RELEASE_BRANCH_NAME
```
From here, we can cherry-pick the commits we want to bring into the patch
release:
```shell
# get the commits ids we want to cherry-pick
git log --oneline
# cherry-pick the commits starting from the oldest one, without including merge commits
git cherry-pick -x <commit-id>
```
After the commits have been cherry picked the release branch needs to be pushed.
```shell
git push upstream $RELEASE_BRANCH_NAME
```
Pushing the branch will cause the tests to run. Make sure they pass prior to
creating the tag. This new tag is going to be the base for the patch release.
Creating a [helm/helm
milestone](https://github.com/helm/helm/milestones) is optional for patch
releases.
Make sure to check [GitHub Actions](https://github.com/helm/helm/actions) to see
that the release passed CI before proceeding. Patch releases can skip steps 2-5
and proceed to step 6 to [Finalize the Release](#6-finalize-the-release).
## 2. Major/Minor releases: Change the Version Number in Git
When doing a major or minor release, make sure to update
`internal/version/version.go` with the new release version.
```shell
$ git diff internal/version/version.go
diff --git a/internal/version/version.go b/internal/version/version.go
index 712aae64..c1ed191e 100644
--- a/internal/version/version.go
+++ b/internal/version/version.go
@@ -30,7 +30,7 @@ var (
// Increment major number for new feature additions and behavioral changes.
// Increment minor number for bug fixes and performance enhancements.
// Increment patch number for critical fixes to existing releases.
- version = "v3.3"
+ version = "v3.4"
// metadata is extra build time data
metadata = ""
```
In addition to updating the version within the `version.go` file, you will also
need to update corresponding tests that are using that version number.
* `cmd/helm/testdata/output/version.txt`
* `cmd/helm/testdata/output/version-client.txt`
* `cmd/helm/testdata/output/version-client-shorthand.txt`
* `cmd/helm/testdata/output/version-short.txt`
* `cmd/helm/testdata/output/version-template.txt`
* `pkg/chartutil/capabilities_test.go`
```shell
git add .
git commit -m "bump version to $RELEASE_NAME"
```
This will update it for the $RELEASE_BRANCH_NAME only. You will also need to
pull this change into the main branch for when the next release is being
created, as in [this example of 3.2 to
3.3](https://github.com/helm/helm/pull/8411/files), and add it to the milestone
for the next release.
```shell
# get the last commit id i.e. commit to bump the version
git log --format="%H" -n 1
# create new branch off main
git checkout main
git checkout -b bump-version-<release_version>
# cherry pick the commit using id from first command
git cherry-pick -x <commit-id>
# push the change
git push origin bump-version-<release_version>
```
## 3. Major/Minor releases: Commit and Push the Release Branch
In order for others to start testing, we can now push the release branch
upstream and start the test process.
```shell
git push upstream $RELEASE_BRANCH_NAME
```
Make sure to check [GitHub Actions](https://github.com/helm/helm/actions) to see
that the release passed CI before proceeding.
If anyone is available, let others peer-review the branch before continuing to
ensure that all the proper changes have been made and all of the commits for the
release are there.
## 4. Major/Minor releases: Create a Release Candidate
Now that the release branch is out and ready, it is time to start creating and
iterating on release candidates.
```shell
git tag --sign --annotate "${RELEASE_CANDIDATE_NAME}" --message "Helm release ${RELEASE_CANDIDATE_NAME}"
git push upstream $RELEASE_CANDIDATE_NAME
```
GitHub Actions will automatically create a tagged release image and client binary to
test with.
For testers, the process to start testing after GitHub Actions finishes building the
artifacts involves the following steps to grab the client:
linux/amd64, using /bin/bash:
```shell
wget https://get.helm.sh/helm-$RELEASE_CANDIDATE_NAME-linux-amd64.tar.gz
```
darwin/amd64, using Terminal.app:
```shell
wget https://get.helm.sh/helm-$RELEASE_CANDIDATE_NAME-darwin-amd64.tar.gz
```
windows/amd64, using PowerShell:
```shell
PS C:\> Invoke-WebRequest -Uri "https://get.helm.sh/helm-$RELEASE_CANDIDATE_NAME-windows-amd64.tar.gz" -OutFile "helm-$RELEASE_CANDIDATE_NAME-windows-amd64.tar.gz"
```
Then, unpack and move the binary to somewhere on your $PATH, or move it
somewhere and add it to your $PATH (e.g. /usr/local/bin/helm for linux/macOS,
C:\Program Files\helm\helm.exe for Windows).
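For example, on linux/amd64 that might look like this (paths are illustrative):
```shell
tar -zxvf helm-$RELEASE_CANDIDATE_NAME-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version
```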
## 5. Major/Minor releases: Iterate on Successive Release Candidates
Spend several days explicitly investing time and resources to try and break helm
in every possible way, documenting any findings pertinent to the release. This
time should be spent testing and finding ways in which the release might have
caused various features or upgrade environments to have issues, not coding.
During this time, the release is in code freeze, and any additional code changes
will be pushed out to the next release.
During this phase, the $RELEASE_BRANCH_NAME branch will keep evolving as you
will produce new release candidates. The frequency of new candidates is up to
the release manager: use your best judgement taking into account the severity of
reported issues, testers' availability, and the release deadline date. Generally
speaking, it is better to let a release roll over the deadline than to ship a
broken release.
Each time you'll want to produce a new release candidate, you will start by
adding commits to the branch by cherry-picking from main:
```shell
git cherry-pick -x <commit_id>
```
You will also want to push the branch to GitHub and ensure it passes CI.
After that, tag it and notify users of the new release candidate:
```shell
export RELEASE_CANDIDATE_NAME="$RELEASE_NAME-rc.2"
git tag --sign --annotate "${RELEASE_CANDIDATE_NAME}" --message "Helm release ${RELEASE_CANDIDATE_NAME}"
git push upstream $RELEASE_CANDIDATE_NAME
```
Once pushed to GitHub, check to ensure the branch with this tag builds in CI.
From here on just repeat this process, continuously testing until you're happy
with the release candidate. For a release candidate, we don't write the full notes,
but you can scaffold out some [release notes](#7-write-the-release-notes).
## 6. Finalize the Release
When you're finally happy with the quality of a release candidate, you can move
on and create the real thing. Double-check one last time to make sure everything
is in order, then finally push the release tag.
```shell
git checkout $RELEASE_BRANCH_NAME
git tag --sign --annotate "${RELEASE_NAME}" --message "Helm release ${RELEASE_NAME}"
git push upstream $RELEASE_NAME
```
Verify that the release succeeded in
[GitHub Actions](https://github.com/helm/helm/actions). If not, you will need to fix the
release and push the release again.
As the CI job will take some time to run, you can move on to writing release
notes while you wait for it to complete.
## 7. Write the Release Notes
We will auto-generate a changelog based on the commits that occurred during a
release cycle, but it is usually more beneficial to the end-user if the release
notes are hand-written by a human being/marketing team/dog.
If you're releasing a major/minor release, listing notable user-facing features
is usually sufficient. For patch releases, do the same, but make note of the
symptoms and who is affected.
The release notes should include the version and planned date of the next release.
An example release note for a minor release would look like this:
```markdown
## vX.Y.Z
Helm vX.Y.Z is a feature release. This release, we focused on <insert focal point>. Users are encouraged to upgrade for the best experience.
The community keeps growing, and we'd love to see you there!
- Join the discussion in [Kubernetes Slack](https://kubernetes.slack.com):
- `#helm-users` for questions and just to hang out
- `#helm-dev` for discussing PRs, code, and bugs
- Hang out at the Public Developer Call: Thursday, 9:30 Pacific via [Zoom](https://zoom.us/j/696660622)
- Test, debug, and contribute charts: [Artifact Hub helm charts](https://artifacthub.io/packages/search?kind=0)
## Notable Changes
- Kubernetes 1.16 is now supported including new manifest apiVersions
- Sprig was upgraded to 2.22
## Installation and Upgrading
Download Helm X.Y. The common platform binaries are here:
- [MacOS amd64](https://get.helm.sh/helm-vX.Y.Z-darwin-amd64.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-darwin-amd64.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux amd64](https://get.helm.sh/helm-vX.Y.Z-linux-amd64.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-amd64.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux arm](https://get.helm.sh/helm-vX.Y.Z-linux-arm.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-arm.tar.gz.sha256) / CHECKSUM_VAL)
- [Linux arm64](https://get.helm.sh/helm-vX.Y.Z-linux-arm64.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-arm64.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux i386](https://get.helm.sh/helm-vX.Y.Z-linux-386.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-386.tar.gz.sha256) / CHECKSUM_VAL)
- [Linux ppc64le](https://get.helm.sh/helm-vX.Y.Z-linux-ppc64le.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-ppc64le.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux s390x](https://get.helm.sh/helm-vX.Y.Z-linux-s390x.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-s390x.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Windows amd64](https://get.helm.sh/helm-vX.Y.Z-windows-amd64.zip) ([checksum](https://get.helm.sh/helm-vX.Y.Z-windows-amd64.zip.sha256sum) / CHECKSUM_VAL)
The [Quickstart Guide](https://docs.helm.sh/using_helm/#quickstart-guide) will get you going from there. For **upgrade instructions** or detailed installation notes, check the [install guide](https://docs.helm.sh/using_helm/#installing-helm). You can also use a [script to install](https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3) on any system with `bash`.
## What's Next
- vX.Y.Z+1 will contain only bug fixes and is planned for <insert DATE>.
- vX.Y+1.0 is the next feature release and is planned for <insert DATE>. This release will focus on ...
## Changelog
- chore(*): bump version to v2.7.0 08c1144f5eb3e3b636d9775617287cc26e53dba4 (Adam Reese)
- fix circle not building tags f4f932fabd197f7e6d608c8672b33a483b4b76fa (Matthew Fisher)
```
A partially completed set of release notes including the changelog can be
created by running the following command:
```shell
export VERSION="$RELEASE_NAME"
export PREVIOUS_RELEASE=vX.Y.Z
make clean
make fetch-dist
make release-notes
```
This will create a good baseline set of release notes to which you should just
need to fill out the **Notable Changes** and **What's next** sections.
Feel free to add your voice to the release notes; it's nice for people to think
we're not all robots.
You should also double check the URLs and checksums are correct in the
auto-generated release notes.
Once finished, go into GitHub to [helm/helm
releases](https://github.com/helm/helm/releases) and edit the release notes for
the tagged release with the notes written here.
For target branch, set to $RELEASE_BRANCH_NAME.
It is now worth getting other people to take a look at the release notes before
the release is published. Send a request out to
[#helm-dev](https://kubernetes.slack.com/messages/C51E88VDG) for review. It is
always beneficial as it can be easy to miss something.
## 8. PGP Sign the downloads
While hashes verify that the content of the downloads is what was generated,
signed packages provide traceability of where the package came from.
To do this, run the following `make` commands:
```shell
export VERSION="$RELEASE_NAME"
make clean # if not already run
make fetch-dist # if not already run
make sign
```
This will generate ASCII-armored signature files for each of the files pushed by
CI.
All of the signature files (`*.asc`) need to be uploaded to the release on
GitHub (attached as release assets).
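If you would rather attach them from the terminal, here is a rough sketch using the GitHub CLI; it assumes `gh` is authenticated and that the `*.asc` files were written to `_dist/` (the directory is an assumption, so check your local build output):
```shell
# Optional sketch: attach the ASCII-armored signatures with the GitHub CLI.
# Assumes gh is authenticated and the *.asc files are in _dist/.
gh release upload "$RELEASE_NAME" _dist/*.asc --repo helm/helm
```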
## 9. Publish Release
Time to make the release official!
After the release notes are saved on GitHub, the CI build is completed, and
you've added the signature files to the release, you can hit "Publish" on
the release. This publishes the release, listing it as "latest", and shows this
release on the front page of the [helm/helm](https://github.com/helm/helm) repo.
## 10. Update Docs
The [Helm website docs section](https://helm.sh/docs) lists the Helm versions
for the docs. Major, minor, and patch versions need to be updated on the site.
The date for the next minor release is also published on the site and must be
updated.
To do that create a pull request against the [helm-www
repository](https://github.com/helm/helm-www). In the `config.toml` file find
the proper `params.versions` section and update the Helm version, like in this
example of [updating the current
version](https://github.com/helm/helm-www/pull/676/files). In the same
`config.toml` file, update the `params.nextversion` section.
Close the [helm/helm milestone](https://github.com/helm/helm/milestones) for
the release, if applicable.
Update the [version
skew](https://github.com/helm/helm-www/blob/main/content/en/docs/topics/version_skew.md)
for major and minor releases.
Update the release calendar [here](https://helm.sh/calendar/release):
* create an entry for the next minor release with a reminder for that day at 5pm GMT
* create an entry for the RC1 of the next minor release on the Monday of the week before the planned release, with a reminder for that day at 5pm GMT
## 11. Tell the Community
Congratulations! You're done. Go grab yourself a $DRINK_OF_CHOICE. You've earned
it.
After enjoying a nice $DRINK_OF_CHOICE, go forth and announce the new release
in Slack and on Twitter with a link to the [release on
GitHub](https://github.com/helm/helm/releases).
Optionally, write a blog post about the new release and showcase some of the new
features on there!
---
title: "Developer Guide"
description: "Instructions for setting up your environment for developing Helm."
weight: 1
aliases: ["/docs/developers/"]
---
This guide explains how to set up your environment for developing on Helm.
## Prerequisites
- The latest version of Go
- A Kubernetes cluster w/ kubectl (optional)
- Git
## Building Helm
We use Make to build our programs. The simplest way to get started is:
```console
$ make
```
If required, this will first install dependencies and validate configuration. It will then compile `helm` and place it in
`bin/helm`.
To run Helm locally, you can run `bin/helm`.
- Helm is known to run on macOS and most Linux distributions, including Alpine.
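After a successful build, a quick sanity check is to ask the freshly built binary for its version (the exact output depends on your checkout):
```console
$ ./bin/helm version --short
```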
## Running tests
To run all the tests, run `make test`.
As a prerequisite, you will need to have
[golangci-lint](https://golangci-lint.run)
installed.
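For example (a minimal sketch, assuming `golangci-lint` is already on your `PATH`):
```console
$ golangci-lint --version   # confirm the prerequisite is available
$ make test                 # run unit tests and style checks
```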
## Running Locally
You can add the directory containing your locally built helm binary to your PATH.
In an editor, open your shell config file and add the following line, making sure
you replace `<path to your binary folder>` with your local bin directory.
``` bash
export PATH="<path to your binary folder>:$PATH"
```
This will allow you to run the locally built version of helm from your terminal.
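For example, if you cloned Helm to `~/src/helm` (an illustrative path) and built it with `make`, the line might look like this:
```bash
export PATH="$HOME/src/helm/bin:$PATH"
```
Open a new shell (or `source` your shell config file) and `helm version` should now report your locally built binary.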
## Contribution Guidelines
We welcome contributions. This project has set up some guidelines in order to
ensure that (a) code quality remains high, (b) the project remains consistent,
and (c) contributions follow the open source legal requirements. Our intent is
not to burden contributors, but to build elegant and high-quality open source
code so that our users will benefit.
Make sure you have read and understood the main CONTRIBUTING guide:
<https://github.com/helm/helm/blob/main/CONTRIBUTING.md>
### Structure of the Code
The code for the Helm project is organized as follows:
- The individual programs are located in `cmd/`. Code inside of `cmd/` is not
designed for library re-use.
- Shared libraries are stored in `pkg/`.
- The `scripts/` directory contains a number of utility scripts. Most of these
are used by the CI/CD pipeline.
Go dependency management is in flux, and it is likely to change during the
course of Helm's lifecycle. We encourage developers to _not_ try to manually
manage dependencies. Instead, we suggest relying upon the project's `Makefile`
to do that for you. With Helm 3, it is recommended that you are on Go version
1.13 or later.
### Writing Documentation
Since Helm 3, documentation has been moved to its own repository. When writing
new features, please write accompanying documentation and submit it to the
[helm-www](https://github.com/helm/helm-www) repository.
One exception: [Helm CLI output (in English)](https://helm.sh/docs/helm/) is
generated from the `helm` binary itself. See [Updating the Helm CLI Reference Docs](https://github.com/helm/helm-www#updating-the-helm-cli-reference-docs)
for instructions on how to generate this output. When translated, the CLI
output is not generated and can be found in `/content/<lang>/docs/helm`.
### Git Conventions
We use Git for our version control system. The `main` branch is the home of
the current development candidate. Releases are tagged.
We accept changes to the code via GitHub Pull Requests (PRs). One workflow for
doing this is as follows (see the sketch after the list):
1. Fork the `github.com/helm/helm` repository into your GitHub account
2. `git clone` the forked repository into your desired directory
3. Create a new working branch (`git checkout -b feat/my-feature`) and do your
work on that branch.
4. When you are ready for us to review, push your branch to GitHub, and then
open a new pull request with us.
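A minimal command-line sketch of that workflow, assuming your fork's remote is named `origin` and `<your-user>` is your GitHub username:
```console
$ git clone git@github.com:<your-user>/helm.git && cd helm
$ git remote add upstream https://github.com/helm/helm.git
$ git checkout -b feat/my-feature
$ # ...make and commit your changes...
$ git push origin feat/my-feature
$ # then open a pull request against helm/helm on GitHub
```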
For Git commit messages, we follow the [Semantic Commit
Messages](https://karma-runner.github.io/0.13/dev/git-commit-msg.html):
```
fix(helm): add --foo flag to 'helm install'
When 'helm install --foo bar' is run, this will print "foo" in the
output regardless of the outcome of the installation.
Closes #1234
```
Common commit types:
- fix: Fix a bug or error
- feat: Add a new feature
- docs: Change documentation
- test: Improve testing
- ref: refactor existing code
Common scopes:
- helm: The Helm CLI
- pkg/lint: The lint package. Follow a similar convention for any package
- `*`: two or more scopes
Read more:
- The [Deis
Guidelines](https://github.com/deis/workflow/blob/master/src/contributing/submitting-a-pull-request.md)
were the inspiration for this section.
- Karma Runner
[defines](https://karma-runner.github.io/0.13/dev/git-commit-msg.html) the
semantic commit message idea.
### Go Conventions
We follow the Go coding style standards very closely. Typically, running `go
fmt` will make your code beautiful for you.
We also typically follow the conventions enforced by `golangci-lint`. Run `make
test-style` to check style conformance.
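For example (a quick sketch; both commands are run from the repository root):
```console
$ go fmt ./...       # format the code
$ make test-style    # run the style checks
```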
Read more:
- Effective Go [introduces
formatting](https://golang.org/doc/effective_go.html#formatting).
- The Go Wiki has a great article on
[formatting](https://github.com/golang/go/wiki/CodeReviewComments).
If you run the `make test` target, not only will unit tests be run, but so will
style tests. If the `make test` target fails, even for stylistic reasons, your
PR will not be considered ready for merging. | helm | title Developer Guide description Instructions for setting up your environment for developing Helm weight 1 aliases docs developers This guide explains how to set up your environment for developing on Helm Prerequisites The latest version of Go A Kubernetes cluster w kubectl optional Git Building Helm We use Make to build our programs The simplest way to get started is console make If required this will first install dependencies and validate configuration It will then compile helm and place it in bin helm To run Helm locally you can run bin helm Helm is known to run on macOS and most Linux distributions including Alpine Running tests To run all the tests run make test As a pre requisite you would need to have golangci lint https golangci lint run installed Running Locally You can update your path and add the path of your local helm binary In an editor open your shell config file Add the following line making sure you replace path to your binary folder with your local bin directory bash export PATH path to your binary folder PATH This will allow you to run the locally built version of helm from your terminal Contribution Guidelines We welcome contributions This project has set up some guidelines in order to ensure that a code quality remains high b the project remains consistent and c contributions follow the open source legal requirements Our intent is not to burden contributors but to build elegant and high quality open source code so that our users will benefit Make sure you have read and understood the main CONTRIBUTING guide https github com helm helm blob main CONTRIBUTING md Structure of the Code The code for the Helm project is organized as follows The individual programs are located in cmd Code inside of cmd is not designed for library re use Shared libraries are stored in pkg The scripts directory contains a number of utility scripts Most of these are used by the CI CD pipeline Go dependency management is in flux and it is likely to change during the course of Helm s lifecycle We encourage developers to not try to manually manage dependencies Instead we suggest relying upon the project s Makefile to do that for you With Helm 3 it is recommended that you are on Go version 1 13 or later Writing Documentation Since Helm 3 documentation has been moved to its own repository When writing new features please write accompanying documentation and submit it to the helm www https github com helm helm www repository One exception Helm CLI output in English https helm sh docs helm is generated from the helm binary itself See Updating the Helm CLI Reference Docs https github com helm helm www updating the helm cli reference docs for instructions on how to generate this output When translated the CLI output is not generated and can be found in content lang docs helm Git Conventions We use Git for our version control system The main branch is the home of the current development candidate Releases are tagged We accept changes to the code via GitHub Pull Requests PRs One workflow for doing this is as follows 1 Fork the github com helm helm repository into your GitHub account 2 git clone the forked repository into your desired directory 3 Create a new working branch git checkout b feat my feature and do your work on that branch 4 When you are ready for us to review push your branch to GitHub and then open a new pull request with us For Git commit messages we follow the Semantic Commit Messages https karma runner github io 0 13 dev git commit msg 
---
title: "Related Projects and Documentation"
description: "third-party tools, plugins and documentation provided by the community!"
weight: 3
aliases: ["/docs/related/"]
---
The Helm community has produced many extra tools, plugins, and documentation
about Helm. We love to hear about these projects.
If you have anything you'd like to add to this list, please open an
[issue](https://github.com/helm/helm-www/issues) or [pull
request](https://github.com/helm/helm-www/pulls).
## Helm Plugins
- [helm-adopt](https://github.com/HamzaZo/helm-adopt) - A helm v3 plugin to adopt
existing k8s resources into a new generated helm chart.
- [helm-chartsnap](https://github.com/jlandowner/helm-chartsnap) - Snapshot testing plugin for Helm charts.
- [Helm Diff](https://github.com/databus23/helm-diff) - Preview `helm upgrade`
as a coloured diff
- [Helm Dt](https://github.com/vmware-labs/distribution-tooling-for-helm) - Plugin that helps distributing Helm charts across OCI registries and on Air gap environments
- [Helm Dashboard](https://github.com/komodorio/helm-dashboard) - GUI for Helm, visualize releases and repositories, manifest diffs
- [helm-gcs](https://github.com/hayorov/helm-gcs) - Plugin to manage repositories
on Google Cloud Storage
- [helm-git](https://github.com/aslafy-z/helm-git) - Install charts and retrieve
values files from your Git repositories
- [helm-k8comp](https://github.com/cststack/k8comp) - Plugin to create Helm
Charts from hiera using k8comp
- [helm-mapkubeapis](https://github.com/helm/helm-mapkubeapis) - Update helm release
metadata to replace deprecated or removed Kubernetes APIs
- [helm-migrate-values](https://github.com/OctopusDeployLabs/helm-migrate-values) - Plugin to migrate user-specified values across Helm chart versions to handle breaking schema changes in `values.yaml`
- [helm-monitor](https://github.com/ContainerSolutions/helm-monitor) - Plugin to
monitor a release and rollback based on Prometheus/ElasticSearch query
- [helm-release-plugin](https://github.com/JovianX/helm-release-plugin) - Plugin for Release management, Update release values, pulls(re-creates) helm Charts from deployed releases, set helm release TTL.
- [helm-s3](https://github.com/hypnoglow/helm-s3) - Helm plugin that allows you to
  use AWS S3 as a [private] chart repository
- [helm-schema-gen](https://github.com/karuppiah7890/helm-schema-gen) - Helm
Plugin that generates values yaml schema for your Helm 3 charts
- [helm-secrets](https://github.com/jkroepke/helm-secrets) - Plugin to manage
and store secrets safely (based on [sops](https://github.com/mozilla/sops))
- [helm-sigstore](https://github.com/sigstore/helm-sigstore) -
Plugin for Helm to integrate the [sigstore](https://sigstore.dev/) ecosystem. Search, upload and verify signed Helm charts.
- [helm-tanka](https://github.com/Duologic/helm-tanka) - A Helm plugin for
rendering Tanka/Jsonnet inside Helm charts.
- [hc-unit](https://github.com/xchapter7x/hcunit) - Plugin for unit testing
charts locally using OPA (Open Policy Agent) & Rego
- [helm-unittest](https://github.com/quintush/helm-unittest) - Plugin for unit
testing chart locally with YAML
- [helm-val](https://github.com/HamzaZo/helm-val) - A plugin to get
values from a previous release.
- [helm-external-val](https://github.com/kuuji/helm-external-val) - A plugin that fetches helm values from external sources (configMaps, Secrets, etc.)
- [helm-images](https://github.com/nikhilsbhat/helm-images) - Helm plugin to fetch all possible images from the chart before deployment or from a deployed release
- [helm-drift](https://github.com/nikhilsbhat/helm-drift) - Helm plugin that identifies the configuration that has drifted from the Helm chart
We also encourage GitHub authors to use the
[helm-plugin](https://github.com/search?q=topic%3Ahelm-plugin&type=Repositories)
tag on their plugin repositories.
## Additional Tools
Tools layered on top of Helm.
- [Aptakube](https://aptakube.com) - Desktop UI for Kubernetes and Helm Releases
- [Armada](https://airshipit.readthedocs.io/projects/armada/en/latest/) - Manage
  prefixed releases throughout various Kubernetes namespaces, and remove
completed jobs for complex deployments
- [avionix](https://github.com/zbrookle/avionix) -
Python interface for generating Helm
charts and Kubernetes yaml, allowing for inheritance and less duplication of code
- [Botkube](https://botkube.io) - Run Helm commands directly from Slack,
Discord, Microsoft Teams, and Mattermost.
- [Captain](https://github.com/alauda/captain) - A Helm3 Controller using
HelmRequest and Release CRD
- [Chartify](https://github.com/appscode/chartify) - Generate Helm charts from
existing Kubernetes resources.
- [ChartMuseum](https://github.com/helm/chartmuseum) - Helm Chart Repository
with support for Amazon S3 and Google Cloud Storage
- [chart-registry](https://github.com/hangyan/chart-registry) - Host Helm Charts
  on an OCI Registry
- [Codefresh](https://codefresh.io) - Kubernetes native CI/CD and management
platform with UI dashboards for managing Helm charts and releases
- [Cyclops](https://cyclops-ui.com) - Dynamic Kubernetes UI rendering based
on Helm charts
- [Flux](https://fluxcd.io/docs/components/helm/) -
Continuous and progressive delivery from Git to Kubernetes.
- [Helmfile](https://github.com/helmfile/helmfile) - Helmfile is a declarative
spec for deploying helm charts
- [Helmper](https://github.com/ChristofferNissen/helmper) - Helmper helps you
import Helm Charts - including all OCI artifacts(images), to your own OCI
registries. Helmper also facilitates security scanning and patching of OCI
images. Helmper utilizes Helm, Oras, Trivy, Copacetic and Buildkitd.
- [Helmsman](https://github.com/Praqma/helmsman) - Helmsman is a
helm-charts-as-code tool which enables
installing/upgrading/protecting/moving/deleting releases from version
controlled desired state files (described in a simple TOML format)
- [HULL](https://github.com/vidispine/hull) - This library chart provides a
ready-to-use interface for specifying all Kubernetes objects directly in the `values.yaml`.
It removes the need to write any templates for your charts and comes with many
additional features to simplify Helm chart creation and usage.
- [Konveyor Move2Kube](https://konveyor.io/move2kube/) -
Generate Helm charts for your
existing projects.
- [Landscaper](https://github.com/Eneco/landscaper/) - "Landscaper takes a set
of Helm Chart references with values (a desired state), and realizes this in a
Kubernetes cluster."
- [Monocular](https://github.com/helm/monocular) - Web UI for Helm Chart
repositories
- [Monokle](https://monokle.io) - Desktop tool for creating, debugging and deploying Kubernetes resources and Helm Charts
- [Orkestra](https://azure.github.io/orkestra/) - A cloud-native Release
Orchestration and Lifecycle Management (LCM) platform for a related group of
Helm releases and their subcharts
- [Tanka](https://tanka.dev/helm) - Grafana Tanka configures Kubernetes
resources through Jsonnet with the ability to consume Helm Charts
- [Terraform Helm
Provider](https://github.com/hashicorp/terraform-provider-helm) - The Helm
provider for HashiCorp Terraform enables lifecycle management of Helm Charts
with a declarative infrastructure-as-code syntax. The Helm provider is often
  paired with other Terraform providers, like the Kubernetes provider, to create
a common workflow across all infrastructure services.
- [VIM-Kubernetes](https://github.com/andrewstuart/vim-kubernetes) - VIM plugin
for Kubernetes and Helm
## Helm Included
Platforms, distributions, and services that include Helm support.
- [Kubernetic](https://kubernetic.com/) - Kubernetes Desktop Client
- [Jenkins X](https://jenkins-x.io/) - open source automated CI/CD for
Kubernetes which uses Helm for
[promoting](https://jenkins-x.io/docs/getting-started/promotion/) applications
through environments via GitOps
## Misc
Grab bag of useful things for Chart authors and Helm users.
- [Await](https://github.com/saltside/await) - Docker image to "await" different
conditions--especially useful for init containers. [More
Info](https://blog.slashdeploy.com/2017/02/16/introducing-await/) | helm | title Related Projects and Documentation description third party tools plugins and documentation provided by the community weight 3 aliases docs related The Helm community has produced many extra tools plugins and documentation about Helm We love to hear about these projects If you have anything you d like to add to this list please open an issue https github com helm helm www issues or pull request https github com helm helm www pulls Helm Plugins helm adopt https github com HamzaZo helm adopt A helm v3 plugin to adopt existing k8s resources into a new generated helm chart helm chartsnap https github com jlandowner helm chartsnap Snapshot testing plugin for Helm charts Helm Diff https github com databus23 helm diff Preview helm upgrade as a coloured diff Helm Dt https github com vmware labs distribution tooling for helm Plugin that helps distributing Helm charts across OCI registries and on Air gap environments Helm Dashboard https github com komodorio helm dashboard GUI for Helm visualize releases and repositories manifest diffs helm gcs https github com hayorov helm gcs Plugin to manage repositories on Google Cloud Storage helm git https github com aslafy z helm git Install charts and retrieve values files from your Git repositories helm k8comp https github com cststack k8comp Plugin to create Helm Charts from hiera using k8comp helm mapkubeapis https github com helm helm mapkubeapis Update helm release metadata to replace deprecated or removed Kubernetes APIs helm migrate values https github com OctopusDeployLabs helm migrate values Plugin to migrate user specified values across Helm chart versions to handle breaking schema changes in values yaml helm monitor https github com ContainerSolutions helm monitor Plugin to monitor a release and rollback based on Prometheus ElasticSearch query helm release plugin https github com JovianX helm release plugin Plugin for Release management Update release values pulls re creates helm Charts from deployed releases set helm release TTL helm s3 https github com hypnoglow helm s3 Helm plugin that allows to use AWS S3 as a private chart repository helm schema gen https github com karuppiah7890 helm schema gen Helm Plugin that generates values yaml schema for your Helm 3 charts helm secrets https github com jkroepke helm secrets Plugin to manage and store secrets safely based on sops https github com mozilla sops helm sigstore https github com sigstore helm sigstore Plugin for Helm to integrate the sigstore https sigstore dev ecosystem Search upload and verify signed Helm charts helm tanka https github com Duologic helm tanka A Helm plugin for rendering Tanka Jsonnet inside Helm charts hc unit https github com xchapter7x hcunit Plugin for unit testing charts locally using OPA Open Policy Agent Rego helm unittest https github com quintush helm unittest Plugin for unit testing chart locally with YAML helm val https github com HamzaZo helm val A plugin to get values from a previous release helm external val https github com kuuji helm external val A plugin that fetches helm values from external sources configMaps Secrets etc helm images https github com nikhilsbhat helm images Helm plugin to fetch all possible images from the chart before deployment or from a deployed release helm drift https github com nikhilsbhat helm drift Helm plugin that identifies the configuration that has drifted from the Helm chart We also encourage GitHub authors to use the helm plugin https github com 
---
title: "Helm"
---
## helm
The Helm package manager for Kubernetes.
### Synopsis
The Kubernetes package manager
Common actions for Helm:
- helm search: search for charts
- helm pull: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
Environment variables:
| Name | Description |
|------------------------------------|------------------------------------------------------------------------------------------------------------|
| $HELM_CACHE_HOME | set an alternative location for storing cached files. |
| $HELM_CONFIG_HOME | set an alternative location for storing Helm configuration. |
| $HELM_DATA_HOME | set an alternative location for storing Helm data. |
| $HELM_DEBUG | indicate whether or not Helm is running in Debug mode |
| $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory, sql. |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use. |
| $HELM_MAX_HISTORY | set the maximum number of helm release history. |
| $HELM_NAMESPACE | set the namespace used for the helm operations. |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
| $HELM_PLUGINS | set the path to the plugins directory |
| $HELM_REGISTRY_CONFIG | set the path to the registry config file. |
| $HELM_REPOSITORY_CACHE | set the path to the repository cache directory |
| $HELM_REPOSITORY_CONFIG | set the path to the repositories file. |
| $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
| $HELM_KUBEAPISERVER | set the Kubernetes API Server Endpoint for authentication |
| $HELM_KUBECAFILE | set the Kubernetes certificate authority file. |
| $HELM_KUBEASGROUPS | set the Groups to use for impersonation using a comma-separated list. |
| $HELM_KUBEASUSER | set the Username to impersonate for the operation. |
| $HELM_KUBECONTEXT | set the name of the kubeconfig context. |
| $HELM_KUBETOKEN | set the Bearer KubeToken used for authentication. |
| $HELM_KUBEINSECURE_SKIP_TLS_VERIFY | indicate if the Kubernetes API server's certificate validation should be skipped (insecure) |
| $HELM_KUBETLS_SERVER_NAME | set the server name used to validate the Kubernetes API server certificate |
| $HELM_BURST_LIMIT | set the default burst limit in the case the server contains many CRDs (default 100, -1 to disable) |
| $HELM_QPS | set the Queries Per Second in cases where a high number of calls exceed the option for higher burst values |
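For example, these variables can be combined on a single invocation; the values below are purely illustrative:
```console
$ # list releases in the "ci" namespace, storing release data in Secrets (the default driver)
$ HELM_NAMESPACE=ci HELM_DRIVER=secret helm list
```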
Helm stores cache, configuration, and data based on the following configuration order:
- If a HELM_*_HOME environment variable is set, it will be used
- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
- When no other location is set a default location will be used based on the operating system
The default directories depend on the operating system. The defaults are listed below:
| Operating System | Cache Path | Configuration Path | Data Path |
|------------------|---------------------------|--------------------------------|-------------------------|
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
### Options
```
--burst-limit int client-side default throttling limit (default 100)
--debug enable verbose output
-h, --help help for helm
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-insecure-skip-tls-verify if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kube-tls-server-name string server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--qps float32 queries per second used when communicating with the Kubernetes API, not including bursting
--registry-config string path to the registry config file (default "~/.config/helm/registry/config.json")
--repository-cache string path to the directory containing cached repository indexes (default "~/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "~/.config/helm/repositories.yaml")
```
### SEE ALSO
* [helm completion](helm_completion.md) - generate autocompletion scripts for the specified shell
* [helm create](helm_create.md) - create a new chart with the given name
* [helm dependency](helm_dependency.md) - manage a chart's dependencies
* [helm env](helm_env.md) - helm client environment information
* [helm get](helm_get.md) - download extended information of a named release
* [helm history](helm_history.md) - fetch release history
* [helm install](helm_install.md) - install a chart
* [helm lint](helm_lint.md) - examine a chart for possible issues
* [helm list](helm_list.md) - list releases
* [helm package](helm_package.md) - package a chart directory into a chart archive
* [helm plugin](helm_plugin.md) - install, list, or uninstall Helm plugins
* [helm pull](helm_pull.md) - download a chart from a repository and (optionally) unpack it in local directory
* [helm push](helm_push.md) - push a chart to remote
* [helm registry](helm_registry.md) - login to or logout from a registry
* [helm repo](helm_repo.md) - add, list, remove, update, and index chart repositories
* [helm rollback](helm_rollback.md) - roll back a release to a previous revision
* [helm search](helm_search.md) - search for a keyword in charts
* [helm show](helm_show.md) - show information of a chart
* [helm status](helm_status.md) - display the status of the named release
* [helm template](helm_template.md) - locally render templates
* [helm test](helm_test.md) - run tests for a release
* [helm uninstall](helm_uninstall.md) - uninstall a release
* [helm upgrade](helm_upgrade.md) - upgrade a release
* [helm verify](helm_verify.md) - verify that a chart at the given path has been signed and is valid
* [helm version](helm_version.md) - print the client version information
---
title: "Helm Install"
---
## helm install
install a chart
### Synopsis
This command installs a chart archive.
The install argument must be a chart reference, a path to a packaged chart,
a path to an unpacked chart directory or a URL.
To override values in a chart, use either the '--values' flag and pass in a file,
or use the '--set' flag and pass configuration from the command line. To force
a string value, use '--set-string'. You can use '--set-file' to set individual
values from a file when the value itself is too long for the command line
or is dynamically generated. You can also use '--set-json' to set json values
(scalars/objects/arrays) from the command line.
$ helm install -f myvalues.yaml myredis ./redis
or
$ helm install --set name=prod myredis ./redis
or
$ helm install --set-string long_int=1234567890 myredis ./redis
or
$ helm install --set-file my_script=dothings.sh myredis ./redis
or
$ helm install --set-json 'master.sidecars=[{"name":"sidecar","image":"myImage","imagePullPolicy":"Always","ports":[{"name":"portname","containerPort":1234}]}]' myredis ./redis
You can specify the '--values'/'-f' flag multiple times. The priority will be given to the
last (right-most) file specified. For example, if both myvalues.yaml and override.yaml
contained a key called 'Test', the value set in override.yaml would take precedence:
$ helm install -f myvalues.yaml -f override.yaml myredis ./redis
You can specify the '--set' flag multiple times. The priority will be given to the
last (right-most) set specified. For example, if both 'bar' and 'newbar' values are
set for a key called 'foo', the 'newbar' value would take precedence:
$ helm install --set foo=bar --set foo=newbar myredis ./redis
Similarly, in the following example 'foo' is set to '["four"]':
$ helm install --set-json='foo=["one", "two", "three"]' --set-json='foo=["four"]' myredis ./redis
And in the following example, 'foo' is set to '{"key1":"value1","key2":"bar"}':
$ helm install --set-json='foo={"key1":"value1","key2":"value2"}' --set-json='foo.key2="bar"' myredis ./redis
To check the generated manifests of a release without installing the chart,
the --debug and --dry-run flags can be combined.
The --dry-run flag will output all generated chart manifests, including Secrets
which can contain sensitive values. To hide Kubernetes Secrets use the
--hide-secret flag. Please carefully consider how and when these flags are used.
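For example, the following commands (with an illustrative release name and chart path) render the chart locally without contacting the cluster, and additionally hide Secret contents:
```console
$ helm install myredis ./redis --dry-run --debug
$ helm install myredis ./redis --dry-run --hide-secret
```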
If --verify is set, the chart MUST have a provenance file, and the provenance
file MUST pass all verification steps.
There are six different ways you can express the chart you want to install:
1. By chart reference: helm install mymaria example/mariadb
2. By path to a packaged chart: helm install mynginx ./nginx-1.2.3.tgz
3. By path to an unpacked chart directory: helm install mynginx ./nginx
4. By absolute URL: helm install mynginx https://example.com/charts/nginx-1.2.3.tgz
5. By chart reference and repo url: helm install --repo https://example.com/charts/ mynginx nginx
6. By OCI registries: helm install mynginx --version 1.2.3 oci://example.com/charts/nginx
CHART REFERENCES
A chart reference is a convenient way of referencing a chart in a chart repository.
When you use a chart reference with a repo prefix ('example/mariadb'), Helm will look in the local
configuration for a chart repository named 'example', and will then look for a
chart in that repository whose name is 'mariadb'. It will install the latest stable version of that chart
unless you specify the '--devel' flag to also include development versions (alpha, beta, and release candidate releases), or
supply a version number with the '--version' flag.
To see the list of chart repositories, use 'helm repo list'. To search for
charts in a repository, use 'helm search'.
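For instance, assuming a repository named 'example' has already been added (the chart name and version here are purely illustrative):
```console
$ helm repo list
$ helm search repo mariadb
$ helm install mymaria example/mariadb --version 11.0.0
```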
```
helm install [NAME] [CHART] [flags]
```
### Options
```
--atomic if set, the installation process deletes the installation on failure. The --wait flag will be set automatically if --atomic is used
--ca-file string verify certificates of HTTPS-enabled servers using this CA bundle
--cert-file string identify HTTPS client using this SSL certificate file
--create-namespace create the release namespace if not present
--dependency-update update dependencies if they are missing before installing the chart
--description string add a custom description
--devel use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is ignored
--disable-openapi-validation if set, the installation process will not validate rendered templates against the Kubernetes OpenAPI Schema
--dry-run string[="client"] simulate an install. If --dry-run is set with no option being specified or as '--dry-run=client', it will not attempt cluster connections. Setting '--dry-run=server' allows attempting cluster connections.
--enable-dns enable DNS lookups when rendering templates
--force force resource updates through a replacement strategy
-g, --generate-name generate the name (and omit the NAME parameter)
-h, --help help for install
--hide-notes if set, do not show notes in install output. Does not affect presence in chart metadata
--hide-secret hide Kubernetes Secrets when also using the --dry-run flag
--insecure-skip-tls-verify skip tls certificate checks for the chart download
--key-file string identify HTTPS client using this SSL key file
--keyring string location of public keys used for verification (default "~/.gnupg/pubring.gpg")
-l, --labels stringToString Labels that would be added to release metadata. Should be divided by comma. (default [])
--name-template string specify template used to name the release
--no-hooks prevent hooks from running during install
-o, --output format prints the output in the specified format. Allowed values: table, json, yaml (default table)
--pass-credentials pass credentials to all domains
--password string chart repository password where to locate the requested chart
--plain-http use insecure HTTP connections for the chart download
--post-renderer postRendererString the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
--post-renderer-args postRendererArgsSlice an argument to the post-renderer (can specify multiple) (default [])
--render-subchart-notes if set, render subchart notes along with the parent
--replace re-use the given name, only if that name is a deleted release which remains in the history. This is unsafe in production
--repo string chart repository url where to locate the requested chart
--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-json stringArray set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2)
--set-literal stringArray set a literal STRING value on the command line
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--skip-crds if set, no CRDs will be installed. By default, CRDs are installed if not already present
--skip-schema-validation if set, disables JSON schema validation
--timeout duration time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)
--username string chart repository username where to locate the requested chart
-f, --values strings specify values in a YAML file or a URL (can specify multiple)
--verify verify the package before using it
--version string specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used
--wait if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout
--wait-for-jobs if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. It will wait for as long as --timeout
```
### Options inherited from parent commands
```
--burst-limit int client-side default throttling limit (default 100)
--debug enable verbose output
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-insecure-skip-tls-verify if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kube-tls-server-name string server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--qps float32 queries per second used when communicating with the Kubernetes API, not including bursting
--registry-config string path to the registry config file (default "~/.config/helm/registry/config.json")
--repository-cache string path to the directory containing cached repository indexes (default "~/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "~/.config/helm/repositories.yaml")
```
### SEE ALSO
* [helm](helm.md) - The Helm package manager for Kubernetes.
---
title: "Helm Upgrade"
---
## helm upgrade
upgrade a release
### Synopsis
This command upgrades a release to a new version of a chart.
The upgrade arguments must be a release and chart. The chart
argument can be either: a chart reference ('example/mariadb'), a path to a chart directory,
a packaged chart, or a fully qualified URL. For chart references, the latest
version will be specified unless the '--version' flag is set.
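For example, each of the following forms is accepted (release and chart names are illustrative):
```console
$ helm upgrade myredis example/redis
$ helm upgrade myredis ./redis
$ helm upgrade myredis ./redis-1.2.3.tgz
$ helm upgrade myredis https://example.com/charts/redis-1.2.3.tgz
```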
To override values in a chart, use either the '--values' flag and pass in a file,
or use the '--set' flag and pass configuration from the command line. To force string
values, use '--set-string'. You can use '--set-file' to set individual
values from a file when the value itself is too long for the command line
or is dynamically generated. You can also use '--set-json' to set json values
(scalars/objects/arrays) from the command line.
You can specify the '--values'/'-f' flag multiple times. The priority will be given to the
last (right-most) file specified. For example, if both myvalues.yaml and override.yaml
contained a key called 'Test', the value set in override.yaml would take precedence:
$ helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
You can specify the '--set' flag multiple times. The priority will be given to the
last (right-most) set specified. For example, if both 'bar' and 'newbar' values are
set for a key called 'foo', the 'newbar' value would take precedence:
$ helm upgrade --set foo=bar --set foo=newbar redis ./redis
You can update the values for an existing release with this command as well via the
'--reuse-values' flag. The 'RELEASE' and 'CHART' arguments should be set to the original
parameters, and existing values will be merged with any values set via '--values'/'-f'
or '--set' flags. Priority is given to new values.
$ helm upgrade --reuse-values --set foo=bar --set foo=newbar redis ./redis
The --dry-run flag will output all generated chart manifests, including Secrets
which can contain sensitive values. To hide Kubernetes Secrets use the
--hide-secret flag. Please carefully consider how and when these flags are used.
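A minimal sketch of a client-side render with Secrets hidden (the release name and chart path are illustrative):
```console
$ helm upgrade myredis ./redis --dry-run --hide-secret
```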
```
helm upgrade [RELEASE] [CHART] [flags]
```
### Options
```
--atomic if set, upgrade process rolls back changes made in case of failed upgrade. The --wait flag will be set automatically if --atomic is used
--ca-file string verify certificates of HTTPS-enabled servers using this CA bundle
--cert-file string identify HTTPS client using this SSL certificate file
--cleanup-on-fail allow deletion of new resources created in this upgrade when upgrade fails
--create-namespace if --install is set, create the release namespace if not present
--dependency-update update dependencies if they are missing before installing the chart
--description string add a custom description
--devel use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is ignored
--disable-openapi-validation if set, the upgrade process will not validate rendered templates against the Kubernetes OpenAPI Schema
--dry-run string[="client"] simulate an install. If --dry-run is set with no option being specified or as '--dry-run=client', it will not attempt cluster connections. Setting '--dry-run=server' allows attempting cluster connections.
--enable-dns enable DNS lookups when rendering templates
--force force resource updates through a replacement strategy
-h, --help help for upgrade
--hide-notes if set, do not show notes in upgrade output. Does not affect presence in chart metadata
--hide-secret hide Kubernetes Secrets when also using the --dry-run flag
--history-max int limit the maximum number of revisions saved per release. Use 0 for no limit (default 10)
--insecure-skip-tls-verify skip tls certificate checks for the chart download
-i, --install if a release by this name doesn't already exist, run an install
--key-file string identify HTTPS client using this SSL key file
--keyring string location of public keys used for verification (default "~/.gnupg/pubring.gpg")
-l, --labels stringToString Labels that would be added to release metadata. Should be separated by comma. Original release labels will be merged with upgrade labels. You can unset label using null. (default [])
--no-hooks disable pre/post upgrade hooks
-o, --output format prints the output in the specified format. Allowed values: table, json, yaml (default table)
--pass-credentials pass credentials to all domains
--password string chart repository password where to locate the requested chart
--plain-http use insecure HTTP connections for the chart download
--post-renderer postRendererString the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
--post-renderer-args postRendererArgsSlice an argument to the post-renderer (can specify multiple) (default [])
--render-subchart-notes if set, render subchart notes along with the parent
--repo string chart repository url where to locate the requested chart
--reset-then-reuse-values when upgrading, reset the values to the ones built into the chart, apply the last release's values and merge in any overrides from the command line via --set and -f. If '--reset-values' or '--reuse-values' is specified, this is ignored
--reset-values when upgrading, reset the values to the ones built into the chart
--reuse-values when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f. If '--reset-values' is specified, this is ignored
--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-json stringArray set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2)
--set-literal stringArray set a literal STRING value on the command line
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--skip-crds if set, no CRDs will be installed when an upgrade is performed with install flag enabled. By default, CRDs are installed if not already present, when an upgrade is performed with install flag enabled
--skip-schema-validation if set, disables JSON schema validation
--timeout duration time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)
--username string chart repository username where to locate the requested chart
-f, --values strings specify values in a YAML file or a URL (can specify multiple)
--verify verify the package before using it
--version string specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used
--wait if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout
--wait-for-jobs if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. It will wait for as long as --timeout
```
### Options inherited from parent commands
```
--burst-limit int client-side default throttling limit (default 100)
--debug enable verbose output
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-insecure-skip-tls-verify if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kube-tls-server-name string server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--qps float32 queries per second used when communicating with the Kubernetes API, not including bursting
--registry-config string path to the registry config file (default "~/.config/helm/registry/config.json")
--repository-cache string path to the directory containing cached repository indexes (default "~/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "~/.config/helm/repositories.yaml")
```
### SEE ALSO
* [helm](helm.md) - The Helm package manager for Kubernetes.
---
title: "Helm Template"
---
## helm template
locally render templates
### Synopsis
Render chart templates locally and display the output.
Any values that would normally be looked up or retrieved in-cluster will be
faked locally. Additionally, none of the server-side testing of chart validity
(e.g. whether an API is supported) is done.
```
helm template [NAME] [CHART] [flags]
```
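As an illustration, a purely local render might look like the following (the chart path, values file, and output directory are hypothetical):
```console
$ helm template myapp ./mychart -f values-prod.yaml --output-dir ./rendered
```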
### Options
```
-a, --api-versions strings Kubernetes api versions used for Capabilities.APIVersions
--atomic if set, the installation process deletes the installation on failure. The --wait flag will be set automatically if --atomic is used
--ca-file string verify certificates of HTTPS-enabled servers using this CA bundle
--cert-file string identify HTTPS client using this SSL certificate file
--create-namespace create the release namespace if not present
--dependency-update update dependencies if they are missing before installing the chart
--description string add a custom description
--devel use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is ignored
--disable-openapi-validation if set, the installation process will not validate rendered templates against the Kubernetes OpenAPI Schema
--dry-run string[="client"] simulate an install. If --dry-run is set with no option being specified or as '--dry-run=client', it will not attempt cluster connections. Setting '--dry-run=server' allows attempting cluster connections.
--enable-dns enable DNS lookups when rendering templates
--force force resource updates through a replacement strategy
-g, --generate-name generate the name (and omit the NAME parameter)
-h, --help help for template
--hide-notes if set, do not show notes in install output. Does not affect presence in chart metadata
--include-crds include CRDs in the templated output
--insecure-skip-tls-verify skip tls certificate checks for the chart download
--is-upgrade set .Release.IsUpgrade instead of .Release.IsInstall
--key-file string identify HTTPS client using this SSL key file
--keyring string location of public keys used for verification (default "~/.gnupg/pubring.gpg")
--kube-version string Kubernetes version used for Capabilities.KubeVersion
-l, --labels stringToString Labels that would be added to release metadata. Should be divided by comma. (default [])
--name-template string specify template used to name the release
--no-hooks prevent hooks from running during install
--output-dir string writes the executed templates to files in output-dir instead of stdout
--pass-credentials pass credentials to all domains
--password string chart repository password where to locate the requested chart
--plain-http use insecure HTTP connections for the chart download
--post-renderer postRendererString the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
--post-renderer-args postRendererArgsSlice an argument to the post-renderer (can specify multiple) (default [])
--release-name use release name in the output-dir path.
--render-subchart-notes if set, render subchart notes along with the parent
--replace re-use the given name, only if that name is a deleted release which remains in the history. This is unsafe in production
--repo string chart repository url where to locate the requested chart
--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-json stringArray set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2)
--set-literal stringArray set a literal STRING value on the command line
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
-s, --show-only stringArray only show manifests rendered from the given templates
--skip-crds if set, no CRDs will be installed. By default, CRDs are installed if not already present
--skip-schema-validation if set, disables JSON schema validation
--skip-tests skip tests from templated output
--timeout duration time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)
--username string chart repository username where to locate the requested chart
--validate validate your manifests against the Kubernetes cluster you are currently pointing at. This is the same validation performed on an install
-f, --values strings specify values in a YAML file or a URL (can specify multiple)
--verify verify the package before using it
--version string specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used
--wait if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout
--wait-for-jobs if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. It will wait for as long as --timeout
```
### Options inherited from parent commands
```
--burst-limit int client-side default throttling limit (default 100)
--debug enable verbose output
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-insecure-skip-tls-verify if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kube-tls-server-name string server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--qps float32 queries per second used when communicating with the Kubernetes API, not including bursting
--registry-config string path to the registry config file (default "~/.config/helm/registry/config.json")
--repository-cache string path to the directory containing cached repository indexes (default "~/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "~/.config/helm/repositories.yaml")
```
### SEE ALSO
* [helm](helm.md) - The Helm package manager for Kubernetes.
---
title: "Kubernetes Distribution Guide"
description: "Captures information about using Helm in specific Kubernetes environments."
aliases: ["/docs/kubernetes_distros/"]
weight: 10
---
Helm should work with any [conformant version of
Kubernetes](https://github.com/cncf/k8s-conformance) (whether
[certified](https://www.cncf.io/certification/software-conformance/) or not).
This document captures information about using Helm in specific Kubernetes
environments. Please contribute more details about any distro; the entries below
are sorted alphabetically.
## AKS
Helm works with [Azure Kubernetes
Service](https://docs.microsoft.com/en-us/azure/aks/kubernetes-helm).
## DC/OS
Helm has been tested and is working on Mesosphere's DC/OS 1.11 Kubernetes
platform, and requires no additional configuration.
## EKS
Helm works with Amazon Elastic Kubernetes Service (Amazon EKS):
[Using Helm with Amazon
EKS](https://docs.aws.amazon.com/eks/latest/userguide/helm.html).
## GKE
Google's GKE hosted Kubernetes platform is known to work with Helm, and requires
no additional configuration.
## `scripts/local-cluster` and Hyperkube
Hyperkube configured via `scripts/local-cluster.sh` is known to work. For raw
Hyperkube you may need to do some manual configuration.
## IKS
Helm works with [IBM Cloud Kubernetes
Service](https://cloud.ibm.com/docs/containers?topic=containers-helm).
## KIND (Kubernetes IN Docker)
Helm is regularly tested on [KIND](https://github.com/kubernetes-sigs/kind).
## KubeOne
Helm works in clusters that are set up by KubeOne without caveats.
## Kubermatic
Helm works in user clusters that are created by Kubermatic without caveats.
Since seed clusters can be set up in different ways, Helm support depends on their
configuration.
## MicroK8s
Helm can be enabled in [MicroK8s](https://microk8s.io) using the command:
`microk8s.enable helm3`
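For example (this assumes the helm3 addon is available in your MicroK8s release; the addon typically also exposes a wrapped `microk8s.helm3` client):
```console
$ microk8s.enable helm3
$ microk8s.helm3 version   # wrapped client provided by the addon
```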
## Minikube
Helm is tested and known to work with
[Minikube](https://github.com/kubernetes/minikube). It requires no additional
configuration.
## OpenShift
Helm works out of the box on OpenShift Online, OpenShift Dedicated, OpenShift
Container Platform (version >= 3.6), and OpenShift Origin (version >= 3.6). To
learn more, read [this blog
post](https://blog.openshift.com/getting-started-helm-openshift/).
## Platform9
Helm is pre-installed with [Platform9 Managed
Kubernetes](https://platform9.com/managed-kubernetes/?utm_source=helm_distro_notes).
Platform9 provides access to all official Helm charts through the App Catalog UI
and native Kubernetes CLI. Additional repositories can be manually added.
Further details are available in this [Platform9 App Catalog
article](https://platform9.com/support/deploying-kubernetes-apps-platform9-managed-kubernetes/?utm_source=helm_distro_notes).
## Ubuntu with `kubeadm`
Kubernetes bootstrapped with `kubeadm` is known to work on the following Linux
distributions:
- Ubuntu 16.04
- Fedora release 25
Some versions of Helm (v2.0.0-beta2) require you to `export
KUBECONFIG=/etc/kubernetes/admin.conf` or create a `~/.kube/config`.
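A sketch of that workaround (the path is the kubeadm default noted above):
```console
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ helm list --all-namespaces
```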
## VMware Tanzu Kubernetes Grid
Helm runs on VMware Tanzu Kubernetes Grid (TKG) without needing configuration changes.
The Tanzu CLI can install packages for [helm-controller](https://fluxcd.io/flux/components/helm/), which allows Helm chart releases to be managed declaratively.
Further details are available in the TKG documentation for [CLI-Managed Packages](https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.6/vmware-tanzu-kubernetes-grid-16/GUID-packages-user-managed-index.html#package-locations-and-dependencies-5).
---
title: "Chart Hooks"
description: "Describes how to work with chart hooks."
aliases: ["/docs/charts_hooks/"]
weight: 2
---
Helm provides a _hook_ mechanism to allow chart developers to intervene at
certain points in a release's life cycle. For example, you can use hooks to:
- Load a ConfigMap or Secret during install before any other charts are loaded.
- Execute a Job to back up a database before installing a new chart, and then
execute a second job after the upgrade in order to restore data.
- Run a Job before deleting a release to gracefully take a service out of
rotation before removing it.
Hooks work like regular templates, but they have special annotations that cause
Helm to utilize them differently. In this section, we cover the basic usage
pattern for hooks.
## The Available Hooks
The following hooks are defined:
| Annotation Value | Description |
| ---------------- | ----------------------------------------------------------------------------------------------------- |
| `pre-install` | Executes after templates are rendered, but before any resources are created in Kubernetes |
| `post-install` | Executes after all resources are loaded into Kubernetes |
| `pre-delete` | Executes on a deletion request before any resources are deleted from Kubernetes |
| `post-delete` | Executes on a deletion request after all of the release's resources have been deleted |
| `pre-upgrade` | Executes on an upgrade request after templates are rendered, but before any resources are updated |
| `post-upgrade` | Executes on an upgrade request after all resources have been upgraded |
| `pre-rollback` | Executes on a rollback request after templates are rendered, but before any resources are rolled back |
| `post-rollback` | Executes on a rollback request after all resources have been modified |
| `test` | Executes when the Helm test subcommand is invoked ([view test docs](/docs/chart_tests/)) |
_Note that the `crd-install` hook has been removed in favor of the `crds/`
directory in Helm 3._
## Hooks and the Release Lifecycle
Hooks allow you, the chart developer, an opportunity to perform operations at
strategic points in a release lifecycle. For example, consider the lifecycle for
a `helm install`. By default, the lifecycle looks like this:
1. User runs `helm install foo`
2. The Helm library install API is called
3. After some verification, the library renders the `foo` templates
4. The library loads the resulting resources into Kubernetes
5. The library returns the release object (and other data) to the client
6. The client exits
Helm defines two hooks for the `install` lifecycle: `pre-install` and
`post-install`. If the developer of the `foo` chart implements both hooks, the
lifecycle is altered like this:
1. User runs `helm install foo`
2. The Helm library install API is called
3. CRDs in the `crds/` directory are installed
4. After some verification, the library renders the `foo` templates
5. The library prepares to execute the `pre-install` hooks (loading hook
resources into Kubernetes)
6. The library sorts hooks by weight (assigning a weight of 0 by default),
by resource kind and finally by name in ascending order.
7. The library then loads the hook with the lowest weight first (negative to
positive)
8. The library waits until the hook is "Ready" (except for CRDs)
9. The library loads the resulting resources into Kubernetes. Note that if the
`--wait` flag is set, the library will wait until all resources are in a
ready state and will not run the `post-install` hook until they are ready.
10. The library executes the `post-install` hook (loading hook resources)
11. The library waits until the hook is "Ready"
12. The library returns the release object (and other data) to the client
13. The client exits
What does it mean to wait until a hook is ready? This depends on the resource
declared in the hook. If the resource is a `Job` or `Pod` kind, Helm will wait
until it successfully runs to completion. And if the hook fails, the release
will fail. This is a _blocking operation_, so the Helm client will pause while
the Job is run.
For all other kinds, as soon as Kubernetes marks the resource as loaded (added
or updated), the resource is considered "Ready". When many resources are
declared in a hook, the resources are executed serially. If they have hook
weights (see below), they are executed in weighted order.
Starting from Helm 3.2.0, hook resources with the same weight are installed in the same
order as normal non-hook resources. Otherwise, ordering is
not guaranteed. (In Helm 2.3.0 and after, they are sorted alphabetically. That
behavior, though, is not considered binding and could change in the future.) It
is considered good practice to add a hook weight, and set it to `0` if weight is
not important.
### Hook resources are not managed with corresponding releases
The resources that a hook creates are currently not tracked or managed as part
of the release. Once Helm verifies that the hook has reached its ready state, it
will leave the hook resource alone. Garbage collection of hook resources when
the corresponding release is deleted may be added to Helm 3 in the future, so
any hook resources that must never be deleted should be annotated with
`helm.sh/resource-policy: keep`.
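As a minimal sketch of that annotation in use (the ConfigMap name and data here are purely illustrative, not taken from the docs above):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Illustrative name; any hook resource can carry these annotations.
  name: bootstrap-keepsake
  annotations:
    "helm.sh/hook": pre-install
    # Tells Helm to never delete this hook-created resource.
    "helm.sh/resource-policy": keep
data:
  note: "survives helm uninstall"
```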
Practically speaking, this means that if you create resources in a hook, you
cannot rely upon `helm uninstall` to remove the resources. To destroy such
resources, you need to either [add a custom `helm.sh/hook-delete-policy`
annotation](#hook-deletion-policies) to the hook template file, or [set the time
to live (TTL) field of a Job
resource](https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/).
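A minimal sketch of the TTL approach, assuming the standard Kubernetes `ttlSecondsAfterFinished` field (this is Kubernetes behavior, not something Helm-specific):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  # Illustrative name for a hook Job that cleans itself up.
  name: post-install-cleanup
  annotations:
    "helm.sh/hook": post-install
spec:
  # Kubernetes deletes the finished Job (and its Pods) 100 seconds after completion.
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: noop
          image: "alpine:3.3"
          command: ["/bin/true"]
```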
## Writing a Hook
Hooks are just Kubernetes manifest files with special annotations in the
`metadata` section. Because they are template files, you can use all of the
normal template features, including reading `.Values`, `.Release`, and
`.Template`.
For example, this template, stored in `templates/post-install-job.yaml`,
declares a job to be run on `post-install`:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{ default "10" .Values.sleepyTime }}"]
```
What makes this template a hook is the annotation:
```yaml
annotations:
"helm.sh/hook": post-install
```
One resource can implement multiple hooks:
```yaml
annotations:
"helm.sh/hook": post-install,post-upgrade
```
Similarly, there is no limit to the number of different resources that may
implement a given hook. For example, one could declare both a secret and a
config map as a pre-install hook.
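For instance, here is a hedged sketch of a Secret and a ConfigMap that both participate in the `pre-install` hook (names and contents are illustrative only):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pre-install-credentials   # illustrative
  annotations:
    "helm.sh/hook": pre-install
stringData:
  password: changeme
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pre-install-settings      # illustrative
  annotations:
    "helm.sh/hook": pre-install
data:
  mode: bootstrap
```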
When subcharts declare hooks, those are also evaluated. There is no way for a
top-level chart to disable the hooks declared by subcharts.
It is possible to define a weight for a hook which will help build a
deterministic executing order. Weights are defined using the following
annotation:
```yaml
annotations:
"helm.sh/hook-weight": "5"
```
Hook weights can be positive or negative numbers but must be represented as
strings. When Helm starts the execution cycle of hooks of a particular Kind it
will sort those hooks in ascending order.
### Hook deletion policies
It is possible to define policies that determine when to delete corresponding
hook resources. Hook deletion policies are defined using the following
annotation:
```yaml
annotations:
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
```
You can choose one or more defined annotation values:
| Annotation Value | Description |
| ---------------------- | -------------------------------------------------------------------- |
| `before-hook-creation` | Delete the previous resource before a new hook is launched (default) |
| `hook-succeeded` | Delete the resource after the hook is successfully executed |
| `hook-failed` | Delete the resource if the hook failed during execution |
If no hook deletion policy annotation is specified, the `before-hook-creation`
behavior applies by default.
---
title: "Migrating Helm v2 to v3"
description: "Learn how to migrate Helm v2 to v3."
weight: 13
---
This guide shows how to migrate Helm v2 to v3. Helm v2 needs to be installed
and managing releases in one or more clusters.
## Overview of Helm 3 Changes
The full list of changes from Helm 2 to 3 is documented in the [FAQ
section](https://v3.helm.sh/docs/faq/#changes-since-helm-2). The following is a
summary of some of those changes that a user should be aware of before and
during migration:
1. Removal of Tiller:
- Replaces client/server with client/library architecture (`helm` binary
only)
- Security is now on per user basis (delegated to Kubernetes user cluster
security)
- Releases are now stored as in-cluster secrets and the release object
metadata has changed
- Releases are persisted on a release namespace basis and not in the Tiller
namespace anymore
2. Chart repository updated:
- `helm search` now supports both local repository searches and making search
queries against Artifact Hub
3. Chart apiVersion bumped to "v2" for following specification changes:
- Dynamically linked chart dependencies moved to `Chart.yaml`
(`requirements.yaml` removed and requirements --> dependencies)
- Library charts (helper/common charts) can now be added as dynamically
linked chart dependencies
- Charts have a `type` metadata field to define the chart to be of an
`application` or `library` chart. It is application by default which means
it is renderable and installable
- Helm 2 charts (apiVersion=v1) are still installable
4. XDG directory specification added:
- Helm home removed and replaced with XDG directory specification for storing
configuration files
- No longer need to initialize Helm
- `helm init` and `helm home` removed
5. Additional changes:
- Helm install/set-up is simplified:
- Helm client (helm binary) only (no Tiller)
- Run-as-is paradigm
- `local` or `stable` repositories are not set-up by default
- `crd-install` hook removed and replaced with `crds` directory in chart
where all CRDs defined in it will be installed before any rendering of the
chart
- `test-failure` hook annotation value removed, and `test-success`
deprecated. Use `test` instead
- Commands removed/replaced/added:
- delete --> uninstall : removes all release history by default
(previously needed `--purge`)
- fetch --> pull
- home (removed)
- init (removed)
- install: requires release name or `--generate-name` argument
- inspect --> show
- reset (removed)
- serve (removed)
- template: `-x`/`--execute` argument renamed to `-s`/`--show-only`
- upgrade: Added argument `--history-max` which limits the maximum number
of revisions saved per release (0 for no limit)
- Helm 3 Go library has undergone a lot of changes and is incompatible with
the Helm 2 library
- Release binaries are now hosted on `get.helm.sh`
## Migration Use Cases
The migration use cases are as follows:
1. Helm v2 and v3 managing the same cluster:
- This use case is only recommended if you intend to phase out Helm v2
gradually and do not require v3 to manage any releases deployed by v2. All
new releases being deployed should be performed by v3 and existing v2
deployed releases are updated/removed by v2 only
- Helm v2 and v3 can quite happily manage the same cluster. The Helm versions
can be installed on the same or separate systems
- If installing Helm v3 on the same system, you need to perform an additional
step to ensure that both client versions can co-exist until ready to remove
Helm v2 client. Rename or put the Helm v3 binary in a different folder to
avoid conflict
- Otherwise there are no conflicts between both versions because of the
following distinctions:
- v2 and v3 release (history) storage are independent of each other. The
changes include the Kubernetes resource for storage and the release
object metadata contained in the resource. Releases will also be on a per
user namespace instead of using the Tiller namespace (for example, v2
default Tiller namespace kube-system). v2 uses "ConfigMaps" or "Secrets"
under the Tiller namespace and `TILLER` ownership. v3 uses "Secrets" in
the user namespace and `helm` ownership. Releases are incremental in both
v2 and v3
- The only issue could be if Kubernetes cluster scoped resources (e.g.
`clusterroles.rbac`) are defined in a chart. The v3 deployment would then
fail even if unique in the namespace as the resources would clash
- v3 configuration no longer uses `$HELM_HOME` and uses XDG directory
specification instead. It is also created on the fly as need be. It is
therefore independent of v2 configuration. This is applicable only when
both versions are installed on the same system
2. Migrating Helm v2 to Helm v3:
- This use case applies when you want Helm v3 to manage existing Helm v2
releases
- It should be noted that a Helm v2 client:
- can manage 1 to many Kubernetes clusters
- can connect to 1 to many Tiller instances for a cluster
- This means that you have to be aware of this when migrating as releases
are deployed into clusters by Tiller and its namespace. You have to
therefore be aware of migrating for each cluster and each Tiller instance
that is managed by the Helm v2 client instance
- The recommended data migration path is as follows:
1. Backup v2 data
2. Migrate Helm v2 configuration
3. Migrate Helm v2 releases
4. When confident that Helm v3 is managing all Helm v2 data (for all
clusters and Tiller instances of the Helm v2 client instance) as
expected, then clean up Helm v2 data
- The migration process is automated by the Helm v3
[2to3](https://github.com/helm/helm-2to3) plugin
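As a hedged sketch of that automation (using the plugin's `move config`, `convert`, and `cleanup` subcommands; the release name is illustrative and flags are omitted):
```console
$ helm plugin install https://github.com/helm/helm-2to3
$ helm 2to3 move config
$ helm 2to3 convert my-release
$ helm 2to3 cleanup
```
Here `move config` migrates the Helm v2 configuration, `convert` migrates a single release, and `cleanup` removes the remaining Helm v2 data once everything has been verified.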
## Reference
- Helm v3 [2to3](https://github.com/helm/helm-2to3) plugin
- Blog [post](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/)
explaining `2to3` plugin usage with examples
---
title: "Chart Tests"
description: "Describes how to run and test your charts."
aliases: ["/docs/chart_tests/"]
weight: 3
---
A chart contains a number of Kubernetes resources and components that work
together. As a chart author, you may want to write some tests that validate that
your chart works as expected when it is installed. These tests also help the
chart consumer understand what your chart is supposed to do.
A **test** in a helm chart lives under the `templates/` directory and is a job
definition that specifies a container with a given command to run. The container
should exit successfully (exit 0) for a test to be considered a success. The job
definition must contain the helm test hook annotation: `helm.sh/hook: test`.
Note that until Helm v3, the job definition needed to contain one of these helm
test hook annotations: `helm.sh/hook: test-success` or `helm.sh/hook: test-failure`.
`helm.sh/hook: test-success` is still accepted as a backwards-compatible
alternative to `helm.sh/hook: test`.
Example tests:
- Validate that your configuration from the values.yaml file was properly
injected.
- Make sure your username and password work correctly
- Make sure an incorrect username and password does not work
- Assert that your services are up and correctly load balancing
- etc.
You can run the pre-defined tests in Helm on a release using the command `helm
test <RELEASE_NAME>`. For a chart consumer, this is a great way to check that
their release of a chart (or application) works as expected.
## Example Test
The [helm create](/docs/helm/helm_create) command will automatically create a number of folders and files. To try the helm test functionality, first create a demo helm chart.
```console
$ helm create demo
```
You will now be able to see the following structure in your demo helm chart.
```
demo/
  Chart.yaml
  values.yaml
  charts/
  templates/
  templates/tests/test-connection.yaml
```
In `demo/templates/tests/test-connection.yaml` you'll see a test you can try. You can see the helm test pod definition here:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "demo.fullname" . }}-test-connection"
  labels:
    {{- include "demo.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "demo.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
```
## Steps to Run a Test Suite on a Release
First, install the chart on your cluster to create a release. You may have to
wait for all pods to become active; if you test immediately after this install,
it is likely to show a transient failure, and you will want to re-test.
```console
$ helm install demo demo --namespace default
$ helm test demo
NAME: demo
LAST DEPLOYED: Mon Feb 14 20:03:16 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: demo-test-connection
Last Started: Mon Feb 14 20:35:19 2022
Last Completed: Mon Feb 14 20:35:23 2022
Phase: Succeeded
[...]
```
## Notes
- You can define as many tests as you would like in a single yaml file or spread
across several yaml files in the `templates/` directory.
- You are welcome to nest your test suite under a `tests/` directory like
`<chart-name>/templates/tests/` for more isolation.
- A test is a [Helm hook](/docs/charts_hooks/), so annotations like
`helm.sh/hook-weight` and `helm.sh/hook-delete-policy` may be used with test
resources.
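For example, a test pod's annotation block that combines these might look like the following hedged sketch (the weight and policy values are illustrative):
```yaml
annotations:
  "helm.sh/hook": test
  "helm.sh/hook-weight": "1"
  "helm.sh/hook-delete-policy": hook-succeeded
```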
---
title: "Use OCI-based registries"
description: "Describes how to use OCI for Chart distribution."
aliases: ["/docs/registries/"]
weight: 7
---
Beginning in Helm 3, you can use container registries with [OCI](https://www.opencontainers.org/) support to store and share chart packages. Beginning in Helm v3.8.0, OCI support is enabled by default.
## OCI support prior to v3.8.0
OCI support graduated from experimental to general availability with Helm v3.8.0. In prior versions of Helm, OCI support behaved differently. If you were using OCI support prior to Helm v3.8.0, it's important to understand what has changed with different versions of Helm.
### Enabling OCI support prior to v3.8.0
Prior to Helm v3.8.0, OCI support is *experimental* and must be enabled.
To enable OCI experimental support for Helm versions prior to v3.8.0, set `HELM_EXPERIMENTAL_OCI` in your environment. For example:
```console
export HELM_EXPERIMENTAL_OCI=1
```
### OCI feature deprecation and behavior changes with v3.8.0
With the release of [Helm v3.8.0](https://github.com/helm/helm/releases/tag/v3.8.0), the following features and behaviors are different from previous versions of Helm:
- When setting a chart in the dependencies as OCI, the version can be set to a range like other dependencies.
- SemVer tags that include build information can be pushed and used. OCI registries don't support `+` as a tag character. Helm translates the `+` to `_` when stored as a tag (see the example after this list).
- The `helm registry login` command now follows the same structure as the Docker CLI for storing credentials. The same location for registry configuration can be passed to both Helm and the Docker CLI.
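To illustrate the tag translation mentioned in the list above (a hedged example: the chart name, version, and registry address are made up, and the `Pushed:` line shows the tag you would expect given the `+` to `_` rule):
```console
$ helm push mychart-1.2.3+20240101.tgz oci://localhost:5000/helm-charts
Pushed: localhost:5000/helm-charts/mychart:1.2.3_20240101
```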
### OCI feature deprecation and behavior changes with v3.7.0
The release of [Helm v3.7.0](https://github.com/helm/helm/releases/tag/v3.7.0) included the implementation of [HIP 6](https://github.com/helm/community/blob/main/hips/hip-0006.md) for OCI support. As a result, the following features and behaviors are different from previous versions of Helm:
- The `helm chart` subcommand has been removed.
- The chart cache has been removed (no `helm chart list` etc.).
- OCI registry references are now always prefixed with `oci://`.
- The basename of the registry reference must *always* match the chart's name.
- The tag of the registry reference must *always* match the chart's semantic version (i.e. no `latest` tags).
- The chart layer media type was switched from `application/tar+gzip` to `application/vnd.cncf.helm.chart.content.v1.tar+gzip`.
## Using an OCI-based registry
### Helm repositories in OCI-based registries
A Helm repository is a way to house and distribute packaged Helm charts. An OCI-based registry can contain zero or more Helm repositories and each of those repositories can contain zero or more packaged Helm charts.
### Use hosted registries
There are several hosted container registries with OCI support that you can use for your Helm charts. For example:
- [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html)
- [Azure Container Registry](https://docs.microsoft.com/azure/container-registry/container-registry-helm-repos#push-chart-to-registry-as-oci-artifact)
- [Docker Hub](https://docs.docker.com/docker-hub/oci-artifacts/)
- [Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/helm/manage-charts)
- [Harbor](https://goharbor.io/docs/main/administration/user-defined-oci-artifact/)
- [IBM Cloud Container Registry](https://cloud.ibm.com/docs/Registry?topic=Registry-registry_helm_charts)
- [JFrog Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation/helm-oci-repositories)
Follow the hosted container registry provider's documentation to create and configure a registry with OCI support.
**Note:** You can run [Docker Registry](https://docs.docker.com/registry/deploying/) or [`zot`](https://github.com/project-zot/zot), which are OCI-based registries, on your development computer. Running an OCI-based registry on your development computer should only be used for testing purposes.
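A hedged sketch of starting such a throwaway registry with Docker (the container name and host port are arbitrary):
```console
$ docker run -d --name oci-registry -p 5000:5000 registry:2
```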
### Using sigstore to sign OCI-based charts
The [`helm-sigstore`](https://github.com/sigstore/helm-sigstore) plugin allows using [Sigstore](https://sigstore.dev/) to sign Helm charts with the same tools used to sign container images. This provides an alternative to the [GPG-based provenance](/docs/provenance/) supported by classic chart repositories.
For more details on using the `helm sigstore` plugin, see [that project's documentation](https://github.com/sigstore/helm-sigstore/blob/main/USAGE.md).
## Commands for working with registries
### The `registry` subcommand
#### `login`
login to a registry (with manual password entry)
```console
$ helm registry login -u myuser localhost:5000
Password:
Login succeeded
```
#### `logout`
logout from a registry
```console
$ helm registry logout localhost:5000
Logout succeeded
```
### The `push` subcommand
Upload a chart to an OCI-based registry:
```console
$ helm push mychart-0.1.0.tgz oci://localhost:5000/helm-charts
Pushed: localhost:5000/helm-charts/mychart:0.1.0
Digest: sha256:ec5f08ee7be8b557cd1fc5ae1a0ac985e8538da7c93f51a51eff4b277509a723
```
The `push` subcommand can only be used against `.tgz` files
created ahead of time using `helm package`.
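In other words, a typical flow is to package and then push (a hedged sketch; the chart directory and registry address are illustrative):
```console
$ helm package mychart
$ helm push mychart-0.1.0.tgz oci://localhost:5000/helm-charts
```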
When using `helm push` to upload a chart to an OCI registry, the reference
must be prefixed with `oci://` and must not contain the basename or tag.
The registry reference basename is inferred from the chart's name,
and the tag is inferred from the chart's semantic version. This is
currently a strict requirement.
Certain registries require the repository and/or namespace (if specified)
to be created beforehand. Otherwise, an error will be produced during the
`helm push` operation.
If you have created a [provenance file](/docs/provenance/) (`.prov`), and it is present next to the chart `.tgz` file, it will
automatically be uploaded to the registry upon `push`. This results in
an extra layer on [the Helm chart manifest](#helm-chart-manifest).
Users of the [helm-push plugin](https://github.com/chartmuseum/helm-push) (for uploading charts to [ChartMuseum](#chartmuseum-repository-server))
may experience issues, since the plugin conflicts with the new, built-in `push`.
As of version v0.10.0, the plugin has been renamed to `cm-push`.
### Other subcommands
Support for the `oci://` protocol is also available in various other subcommands.
Here is a complete list:
- `helm pull`
- `helm show`
- `helm template`
- `helm install`
- `helm upgrade`
The basename (chart name) of the registry reference *is*
included for any type of action involving chart download
(vs. `helm push` where it is omitted).
Here are a few examples of using the subcommands listed above against
OCI-based charts:
```
$ helm pull oci://localhost:5000/helm-charts/mychart --version 0.1.0
Pulled: localhost:5000/helm-charts/mychart:0.1.0
Digest: sha256:0be7ec9fb7b962b46d81e4bb74fdcdb7089d965d3baca9f85d64948b05b402ff
$ helm show all oci://localhost:5000/helm-charts/mychart --version 0.1.0
apiVersion: v2
appVersion: 1.16.0
description: A Helm chart for Kubernetes
name: mychart
...
$ helm template myrelease oci://localhost:5000/helm-charts/mychart --version 0.1.0
---
# Source: mychart/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
...
$ helm install myrelease oci://localhost:5000/helm-charts/mychart --version 0.1.0
NAME: myrelease
LAST DEPLOYED: Wed Oct 27 15:11:40 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
...
$ helm upgrade myrelease oci://localhost:5000/helm-charts/mychart --version 0.2.0
Release "myrelease" has been upgraded. Happy Helming!
NAME: myrelease
LAST DEPLOYED: Wed Oct 27 15:12:05 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
...
```
## Specifying dependencies
Dependencies of a chart can be pulled from a registry using the `dependency update` subcommand.
The `repository` for a given entry in `Chart.yaml` is specified as the registry reference without the basename:
```
dependencies:
  - name: mychart
    version: "2.7.0"
    repository: "oci://localhost:5000/myrepo"
```
This will fetch `oci://localhost:5000/myrepo/mychart:2.7.0` when `dependency update` is executed.
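A hedged usage example, assuming the parent chart lives in the current directory:
```console
$ helm dependency update .
```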
## Helm chart manifest
Example Helm chart manifest as represented in a registry
(note the `mediaType` fields):
```json
{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.cncf.helm.config.v1+json",
    "digest": "sha256:8ec7c0f2f6860037c19b54c3cfbab48d9b4b21b485a93d87b64690fdb68c2111",
    "size": 117
  },
  "layers": [
    {
      "mediaType": "application/vnd.cncf.helm.chart.content.v1.tar+gzip",
      "digest": "sha256:1b251d38cfe948dfc0a5745b7af5ca574ecb61e52aed10b19039db39af6e1617",
      "size": 2487
    }
  ]
}
```
The following example contains a
[provenance file](/docs/provenance/)
(note the extra layer):
```json
{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.cncf.helm.config.v1+json",
    "digest": "sha256:8ec7c0f2f6860037c19b54c3cfbab48d9b4b21b485a93d87b64690fdb68c2111",
    "size": 117
  },
  "layers": [
    {
      "mediaType": "application/vnd.cncf.helm.chart.content.v1.tar+gzip",
      "digest": "sha256:1b251d38cfe948dfc0a5745b7af5ca574ecb61e52aed10b19039db39af6e1617",
      "size": 2487
    },
    {
      "mediaType": "application/vnd.cncf.helm.chart.provenance.v1.prov",
      "digest": "sha256:3e207b409db364b595ba862cdc12be96dcdad8e36c59a03b7b3b61c946a5741a",
      "size": 643
    }
  ]
}
```
## Migrating from chart repos
Migrating from classic chart repositories
(index.yaml-based repos) is as simple as using `helm pull`, then using `helm push` to upload the resulting `.tgz` files to a registry.
---
title: "Helm Provenance and Integrity"
description: "Describes how to verify the integrity and origin of a Chart."
aliases: ["/docs/provenance/"]
weight: 5
---
Helm has provenance tools which help chart users verify the integrity and origin
of a package. Using industry-standard tools based on PKI, GnuPG, and
well-respected package managers, Helm can generate and verify signature files.
## Overview
Integrity is established by comparing a chart to a provenance record. Provenance
records are stored in _provenance files_, which are stored alongside a packaged
chart. For example, if a chart is named `myapp-1.2.3.tgz`, its provenance file
will be `myapp-1.2.3.tgz.prov`.
Provenance files are generated at packaging time (`helm package --sign ...`),
and can be checked by multiple commands, notably `helm install --verify`.
## The Workflow
This section describes a potential workflow for using provenance data
effectively.
Prerequisites:
- A valid PGP keypair in a binary (not ASCII-armored) format
- The `helm` command line tool
- GnuPG command line tools (optional)
- Keybase command line tools (optional)
**NOTE:** If your PGP private key has a passphrase, you will be prompted to
enter that passphrase for any commands that support the `--sign` option.
Creating a new chart is the same as before:
```console
$ helm create mychart
Creating mychart
```
Once ready to package, add the `--sign` flag to `helm package`. Also, specify
the name under which the signing key is known and the keyring containing the
corresponding private key:
```console
$ helm package --sign --key 'John Smith' --keyring path/to/keyring.secret mychart
```
**Note:** The value of the `--key` argument must be a substring of the desired
key's `uid` (in the output of `gpg --list-keys`), for example the name or email.
**The fingerprint _cannot_ be used.**
**TIP:** for GnuPG users, your secret keyring is in `~/.gnupg/secring.gpg`. You
can use `gpg --list-secret-keys` to list the keys you have.
**Warning:** GnuPG v2 stores your secret keyring in a new format, `kbx`, at the
default location `~/.gnupg/pubring.kbx`. Please use the following command
to convert your keyring to the legacy gpg format:
```console
$ gpg --export >~/.gnupg/pubring.gpg
$ gpg --export-secret-keys >~/.gnupg/secring.gpg
```
At this point, you should see both `mychart-0.1.0.tgz` and
`mychart-0.1.0.tgz.prov`. Both files should eventually be uploaded to your
desired chart repository.
You can verify a chart using `helm verify`:
```console
$ helm verify mychart-0.1.0.tgz
```
A failed verification looks like this:
```console
$ helm verify topchart-0.1.0.tgz
Error: sha256 sum does not match for topchart-0.1.0.tgz: "sha256:1939fbf7c1023d2f6b865d137bbb600e0c42061c3235528b1e8c82f4450c12a7" != "sha256:5a391a90de56778dd3274e47d789a2c84e0e106e1a37ef8cfa51fd60ac9e623a"
```
To verify during an install, use the `--verify` flag.
```console
$ helm install --generate-name --verify mychart-0.1.0.tgz
```
If the keyring containing the public key associated with the signed chart is not
in the default location, you may need to point to the keyring with `--keyring
PATH` as in the `helm package` example.
If verification fails, the install will be aborted before the chart is even
rendered.
### Using Keybase.io credentials
The [Keybase.io](https://keybase.io) service makes it easy to establish a chain
of trust for a cryptographic identity. Keybase credentials can be used to sign
charts.
Prerequisites:
- A configured Keybase.io account
- GnuPG installed locally
- The `keybase` CLI installed locally
#### Signing packages
The first step is to import your keybase keys into your local GnuPG keyring:
```console
$ keybase pgp export -s | gpg --import
```
This will convert your Keybase key into the OpenPGP format, and then import it
locally into your `~/.gnupg/secring.gpg` file.
You can double check by running `gpg --list-secret-keys`.
```console
$ gpg --list-secret-keys
/Users/mattbutcher/.gnupg/secring.gpg
-------------------------------------
sec 2048R/1FC18762 2016-07-25
uid technosophos (keybase.io/technosophos) <[email protected]>
ssb 2048R/D125E546 2016-07-25
```
Note that your secret key will have an identifier string:
```
technosophos (keybase.io/technosophos) <[email protected]>
```
That is the full name of your key.
Next, you can package and sign a chart with `helm package`. Make sure you use at
least part of that name string in `--key`.
```console
$ helm package --sign --key technosophos --keyring ~/.gnupg/secring.gpg mychart
```
As a result, the `package` command should produce both a `.tgz` file and a
`.tgz.prov` file.
#### Verifying packages
You can also use a similar technique to verify a chart signed by someone else's
Keybase key. Say you want to verify a package signed by
`keybase.io/technosophos`. To do this, use the `keybase` tool:
```console
$ keybase follow technosophos
$ keybase pgp pull
```
The first command above tracks the user `technosophos`. Next `keybase pgp pull`
downloads the OpenPGP keys of all of the accounts you follow, placing them in
your GnuPG keyring (`~/.gnupg/pubring.gpg`).
At this point, you can now use `helm verify` or any of the commands with a
`--verify` flag:
```console
$ helm verify somechart-1.2.3.tgz
```
### Reasons a chart may not verify
These are common reasons for failure.
- The `.prov` file is missing or corrupt. This indicates that something is
misconfigured or that the original maintainer did not create a provenance
file.
- The key used to sign the file is not in your keyring. This indicates that the
  entity who signed the chart is not someone you've already signaled that you
  trust.
- The verification of the `.prov` file failed. This indicates that something is
wrong with either the chart or the provenance data.
- The file hashes in the provenance file do not match the hash of the archive
file. This indicates that the archive has been tampered with.
If a verification fails, there is reason to distrust the package.
## The Provenance File
The provenance file contains a chart’s YAML file plus several pieces of
verification information. Provenance files are designed to be automatically
generated.
The following pieces of provenance data are added:
* The chart file (`Chart.yaml`) is included to give both humans and tools an
easy view into the contents of the chart.
* The signature (SHA256, just like Docker) of the chart package (the `.tgz`
file) is included, and may be used to verify the integrity of the chart
package.
* The entire body is signed using the algorithm used by OpenPGP (see
[Keybase.io](https://keybase.io) for an emerging way of making crypto
signing and verification easy).
In combination, this gives users the following assurances:
* The package itself has not been tampered with (verified via the SHA-256
  checksum of the `.tgz` package).
* The entity who released this package is known (via the GnuPG/PGP signature).
The format of the file looks something like this:
```
Hash: SHA512
apiVersion: v2
appVersion: "1.16.0"
description: Sample chart
name: mychart
type: application
version: 0.1.0
...
files:
mychart-0.1.0.tgz: sha256:d31d2f08b885ec696c37c7f7ef106709aaf5e8575b6d3dc5d52112ed29a9cb92
-----BEGIN PGP SIGNATURE-----
wsBcBAEBCgAQBQJdy0ReCRCEO7+YH8GHYgAAfhUIADx3pHHLLINv0MFkiEYpX/Kd
nvHFBNps7hXqSocsg0a9Fi1LRAc3OpVh3knjPfHNGOy8+xOdhbqpdnB+5ty8YopI
mYMWp6cP/Mwpkt7/gP1ecWFMevicbaFH5AmJCBihBaKJE4R1IX49/wTIaLKiWkv2
cR64bmZruQPSW83UTNULtdD7kuTZXeAdTMjAK0NECsCz9/eK5AFggP4CDf7r2zNi
hZsNrzloIlBZlGGns6mUOTO42J/+JojnOLIhI3Psd0HBD2bTlsm/rSfty4yZUs7D
qtgooNdohoyGSzR5oapd7fEvauRQswJxOA0m0V+u9/eyLR0+JcYB8Udi1prnWf8=
=aHfz
-----END PGP SIGNATURE-----
```
Note that the YAML section contains two documents (separated by `...\n`). The
first document is the content of `Chart.yaml`. The second is the checksums, a
map of filenames to SHA-256 digests of each file's content at packaging time.
The signature block is a standard PGP signature, which provides [tamper
resistance](https://www.rossde.com/PGP/pgp_signatures.html).
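Because the provenance file is a standard clearsigned document, you can also inspect it directly with GnuPG, assuming the signer's public key is already in your default keyring:
```console
$ gpg --verify mychart-0.1.0.tgz.prov
```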
## Chart Repositories
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the
chart.
For example, if the base URL for a package is
`https://example.com/charts/mychart-1.2.3.tgz`, the provenance file, if it
exists, MUST be accessible at
`https://example.com/charts/mychart-1.2.3.tgz.prov`.
From the end user's perspective, `helm install --verify myrepo/mychart-1.2.3`
should result in the download of both the chart and the provenance file with no
additional user configuration or action.
### Signatures in OCI-based registries
When publishing charts to an [OCI-based registry](), the
[`helm-sigstore` plugin](https://github.com/sigstore/helm-sigstore/) can be used
to publish provenance to [sigstore](https://sigstore.dev/). [As described in the
documentation](https://github.com/sigstore/helm-sigstore/blob/main/USAGE.md), the
process of creating provenance and signing with a GPG key is the same, but the
`helm sigstore upload` command can be used to publish the provenance to an
immutable transparency log.
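A rough sketch of that flow follows; the exact arguments are documented by the plugin itself, so treat this as illustrative rather than authoritative:
```console
$ helm plugin install https://github.com/sigstore/helm-sigstore
$ helm package --sign --key 'John Smith' --keyring path/to/keyring.secret mychart
$ helm sigstore upload mychart-0.1.0.tgz
```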
## Establishing Authority and Authenticity
When dealing with chain-of-trust systems, it is important to be able to
establish the authority of a signer. Or, to put this plainly, the system above
hinges on the fact that you trust the person who signed the chart. That, in
turn, means you need to trust the public key of the signer.
One of the design decisions with Helm has been that the Helm project would not
insert itself into the chain of trust as a necessary party. We don't want to be
"the certificate authority" for all chart signers. Instead, we strongly favor a
decentralized model, which is part of the reason we chose OpenPGP as our
foundational technology. So when it comes to establishing authority, we have
left this step more-or-less undefined in Helm 2 (a decision carried forward in
Helm 3).
However, we have some pointers and recommendations for those interested in using
the provenance system:
- The [Keybase](https://keybase.io) platform provides a public centralized
repository for trust information.
- You can use Keybase to store your keys or to get the public keys of others.
- Keybase also has fabulous documentation available
- While we haven't tested it, Keybase's "secure website" feature could be used
to serve Helm charts.
- The basic idea is that an official "chart reviewer" signs charts with her or
his key, and the resulting provenance file is then uploaded to the chart
repository.
- There has been some work on the idea that a list of valid signing keys may
be included in the `index.yaml` file of a repository.
---
title: "The Chart Repository Guide"
description: "How to create and work with Helm chart repositories."
aliases: ["/docs/chart_repository/"]
weight: 6
---
This section explains how to create and work with Helm chart repositories. At a
high level, a chart repository is a location where packaged charts can be stored
and shared.
The distributed community Helm chart repository is located at
[Artifact Hub](https://artifacthub.io/packages/search?kind=0) and welcomes
participation. But Helm also makes it possible to create and run your own chart
repository. This guide explains how to do so.
## Prerequisites
* Go through the [Quickstart]() Guide
* Read through the [Charts]() document
## Create a chart repository
A _chart repository_ is an HTTP server that houses an `index.yaml` file and
optionally some packaged charts. When you're ready to share your charts, the
preferred way to do so is by uploading them to a chart repository.
As of Helm 2.2.0, client-side SSL auth to a repository is supported. Other
authentication protocols may be available as plugins.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
down to hosting your own chart repository. For example, you can use a Google
Cloud Storage (GCS) bucket, Amazon S3 bucket, GitHub Pages, or even create your
own web server.
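For a quick local experiment (a sketch, assuming Python 3 is installed and a `charts/` directory already contains packaged charts and an `index.yaml`, as described below), any static file server will do:
```console
$ cd charts/ && python3 -m http.server 8080
# In a second terminal:
$ helm repo add local http://localhost:8080
```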
### The chart repository structure
A chart repository consists of packaged charts and a special file called
`index.yaml` which contains an index of all of the charts in the repository.
Frequently, the charts that `index.yaml` describes are also hosted on the same
server, as are the [provenance files]().
For example, the layout of the repository `https://example.com/charts` might
look like this:
```
charts/
|
|- index.yaml
|
|- alpine-0.1.2.tgz
|
|- alpine-0.1.2.tgz.prov
```
In this case, the index file would contain information about one chart, the
Alpine chart, and provide the download URL
`https://example.com/charts/alpine-0.1.2.tgz` for that chart.
It is not required that a chart package be located on the same server as the
`index.yaml` file. However, doing so is often the easiest.
### The index file
The index file is a YAML file called `index.yaml`. It contains some metadata
about the package, including the contents of a chart's `Chart.yaml` file. A
valid chart repository must have an index file. The index file contains
information about each chart in the chart repository. The `helm repo index`
command will generate an index file based on a given local directory that
contains packaged charts.
This is an example of an index file:
```yaml
apiVersion: v1
entries:
alpine:
- created: 2016-10-06T16:23:20.499814565-06:00
description: Deploy a basic Alpine Linux pod
digest: 99c76e403d752c84ead610644d4b1c2f2b453a74b921f422b9dcb8a7c8b559cd
home: https://helm.sh/helm
name: alpine
sources:
- https://github.com/helm/helm
urls:
- https://technosophos.github.io/tscharts/alpine-0.2.0.tgz
version: 0.2.0
- created: 2016-10-06T16:23:20.499543808-06:00
description: Deploy a basic Alpine Linux pod
digest: 515c58e5f79d8b2913a10cb400ebb6fa9c77fe813287afbacf1a0b897cd78727
home: https://helm.sh/helm
name: alpine
sources:
- https://github.com/helm/helm
urls:
- https://technosophos.github.io/tscharts/alpine-0.1.0.tgz
version: 0.1.0
nginx:
- created: 2016-10-06T16:23:20.499543808-06:00
description: Create a basic nginx HTTP server
digest: aaff4545f79d8b2913a10cb400ebb6fa9c77fe813287afbacf1a0b897cdffffff
home: https://helm.sh/helm
name: nginx
sources:
- https://github.com/helm/charts
urls:
- https://technosophos.github.io/tscharts/nginx-1.1.0.tgz
version: 1.1.0
generated: 2016-10-06T16:23:20.499029981-06:00
```
## Hosting Chart Repositories
This part shows several ways to serve a chart repository.
### Google Cloud Storage
The first step is to **create your GCS bucket**. We'll call ours
`fantastic-charts`.

Next, make your bucket public by **editing the bucket permissions**.

Insert this line item to **make your bucket public**:

Congratulations, now you have an empty GCS bucket ready to serve charts!
You may upload your chart repository using the Google Cloud Storage command
line tool, or using the GCS web UI. A public GCS bucket can be accessed via
simple HTTPS at this address: `https://bucket-name.storage.googleapis.com/`.
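For example, one way to upload the repository contents (a sketch, assuming the `gsutil` CLI is installed and authenticated, and that your packaged charts and `index.yaml` live in a local `fantastic-charts/` directory):
```console
$ gsutil rsync fantastic-charts/ gs://fantastic-charts
```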
### Cloudsmith
You can also set up chart repositories using Cloudsmith. Read more about
chart repositories with Cloudsmith
[here](https://help.cloudsmith.io/docs/helm-chart-repository)
### JFrog Artifactory
Similarly, you can also set up chart repositories using JFrog Artifactory. Read more about
chart repositories with JFrog Artifactory
[here](https://www.jfrog.com/confluence/display/RTF/Helm+Chart+Repositories)
### GitHub Pages example
In a similar way, you can create a chart repository using GitHub Pages.
GitHub allows you to serve static web pages in two different ways:
- By configuring a project to serve the contents of its `docs/` directory
- By configuring a project to serve a particular branch
We'll take the second approach, though the first is just as easy.
The first step will be to **create your gh-pages branch**. You can do that
locally as follows:
```console
$ git checkout -b gh-pages
```
Or via your web browser, using the **Branch** button on your GitHub repository:

Next, you'll want to make sure your **gh-pages branch** is set as GitHub Pages.
Click on your repo **Settings**, scroll down to the **GitHub Pages** section,
and set it as per below:

By default, **Source** is usually set to the **gh-pages branch**; if it is not,
select it.
You can use a **custom domain** there if you wish.
Also check that **Enforce HTTPS** is ticked, so that **HTTPS** is used when
charts are served.
In this setup you can use your default branch to store your charts' code, and
the **gh-pages branch** as the chart repository, e.g.:
`https://USERNAME.github.io/REPONAME`. The demonstration [TS
Charts](https://github.com/technosophos/tscharts) repository is accessible at
`https://technosophos.github.io/tscharts/`.
If you have decided to use GitHub pages to host the chart repository, check out
[Chart Releaser Action]().
Chart Releaser Action is a GitHub Action workflow to turn a GitHub project into
a self-hosted Helm chart repo, using
the [helm/chart-releaser](https://github.com/helm/chart-releaser) CLI tool.
### Ordinary web servers
To configure an ordinary web server to serve Helm charts, you merely need to do
the following:
- Put your index and charts in a directory that the server can serve
- Make sure the `index.yaml` file can be accessed with no authentication
requirement
- Make sure `yaml` files are served with the correct content type (`text/yaml`
or `text/x-yaml`)
For example, if you want to serve your charts out of `$WEBROOT/charts`, make
sure there is a `charts/` directory in your web root, and put the index file and
charts inside of that folder.
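A minimal sketch, assuming `$WEBROOT` points at your server's document root, the repository will be served at `https://example.com/charts`, and `mychart-0.1.0.tgz` is a chart you have already packaged:
```console
$ mkdir -p $WEBROOT/charts
$ cp mychart-0.1.0.tgz $WEBROOT/charts/
$ helm repo index $WEBROOT/charts --url https://example.com/charts
```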
### ChartMuseum Repository Server
ChartMuseum is an open-source Helm Chart Repository server written in Go
(Golang), with support for cloud storage backends, including [Google Cloud
Storage](https://cloud.google.com/storage/), [Amazon
S3](https://aws.amazon.com/s3/), [Microsoft Azure Blob
Storage](https://azure.microsoft.com/en-us/services/storage/blobs/), [Alibaba
Cloud OSS Storage](https://www.alibabacloud.com/product/oss), [Openstack Object
Storage](https://developer.openstack.org/api-ref/object-store/), [Oracle Cloud
Infrastructure Object Storage](https://cloud.oracle.com/storage), [Baidu Cloud
BOS Storage](https://cloud.baidu.com/product/bos.html), [Tencent Cloud Object
Storage](https://intl.cloud.tencent.com/product/cos), [DigitalOcean
Spaces](https://www.digitalocean.com/products/spaces/),
[Minio](https://min.io/), and [etcd](https://etcd.io/).
You can also use the
[ChartMuseum](https://chartmuseum.com/docs/#using-with-local-filesystem-storage)
server to host a chart repository from a local file system.
### GitLab Package Registry
With GitLab you can publish Helm charts in your project’s Package Registry.
Read more about setting up a helm package repository with GitLab [here](https://docs.gitlab.com/ee/user/packages/helm_repository/).
## Managing Chart Repositories
Now that you have a chart repository, the last part of this guide explains how
to maintain charts in that repository.
### Store charts in your chart repository
Now that you have a chart repository, let's upload a chart and an index file to
the repository. Charts in a chart repository must be packaged (`helm package
chart-name/`) and versioned correctly (following [SemVer 2](https://semver.org/)
guidelines).
These next steps compose an example workflow, but you are welcome to use
whatever workflow you fancy for storing and updating charts in your chart
repository.
Once you have a packaged chart ready, create a new directory, and move your
packaged chart to that directory.
```console
$ helm package docs/examples/alpine/
$ mkdir fantastic-charts
$ mv alpine-0.1.0.tgz fantastic-charts/
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
```
The last command takes the path of the local directory that you just created and
the URL of your remote chart repository and composes an `index.yaml` file inside
the given directory path.
Now you can upload the chart and the index file to your chart repository using a
sync tool or manually. If you're using Google Cloud Storage, check out this
[example workflow]()
using the gsutil client. For GitHub, you can simply put the charts in the
appropriate destination branch.
### Add new charts to an existing repository
Each time you want to add a new chart to your repository, you must regenerate
the index. The `helm repo index` command will completely rebuild the
`index.yaml` file from scratch, including only the charts that it finds locally.
However, you can use the `--merge` flag to incrementally add new charts to an
existing `index.yaml` file (a great option when working with a remote repository
like GCS). Run `helm repo index --help` to learn more.
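A sketch of the merge workflow, assuming the existing `index.yaml` is first downloaded from the remote repository into the current directory:
```console
$ curl -O https://fantastic-charts.storage.googleapis.com/index.yaml
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com --merge index.yaml
```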
Make sure that you upload both the revised `index.yaml` file and the chart. And
if you generated a provenance file, upload that too.
### Share your charts with others
When you're ready to share your charts, simply let someone know what the URL of
your repository is.
From there, they will add the repository to their helm client via the `helm repo
add [NAME] [URL]` command with any name they would like to use to reference the
repository.
```console
$ helm repo add fantastic-charts https://fantastic-charts.storage.googleapis.com
$ helm repo list
fantastic-charts https://fantastic-charts.storage.googleapis.com
```
If the charts are backed by HTTP basic authentication, you can also supply the
username and password here:
```console
$ helm repo add fantastic-charts https://fantastic-charts.storage.googleapis.com --username my-username --password my-password
$ helm repo list
fantastic-charts https://fantastic-charts.storage.googleapis.com
```
**Note:** A repository will not be added if it does not contain a valid
`index.yaml`.
**Note:** If your Helm repository uses, for example, a self-signed certificate,
you can pass `--insecure-skip-tls-verify` to `helm repo add` in order to skip
the CA verification.
After that, your users will be able to search through your charts. After you've
updated the repository, they can use the `helm repo update` command to get the
latest chart information.
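For example, assuming the repository was added under the name `fantastic-charts`:
```console
$ helm repo update
$ helm search repo fantastic-charts
```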
*Under the hood, the `helm repo add` and `helm repo update` commands fetch the
`index.yaml` file and store it in the
`$XDG_CACHE_HOME/helm/repository/cache/` directory. This is where the `helm
search` function finds information about charts.*
---
title: "Deprecated Kubernetes APIs"
description: "Explains deprecated Kubernetes APIs in Helm"
aliases: ["docs/k8s_apis/"]
---
Kubernetes is an API-driven system and the API evolves over time to reflect the
evolving understanding of the problem space. This is common practice across
systems and their APIs. An important part of evolving APIs is a good deprecation
policy and process to inform users of how changes to APIs are implemented. In
other words, consumers of your API need to know in advance and in what release
an API will be removed or changed. This removes the element of surprise and
breaking changes to consumers.
The [Kubernetes deprecation
policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/)
documents how Kubernetes handles the changes to its API versions. The policy for
deprecation states the timeframe that API versions will be supported following a
deprecation announcement. It is therefore important to be aware of deprecation
announcements and know when API versions will be removed, to help minimize the
effect.
This is an example of an announcement [for the removal of deprecated API
versions in Kubernetes
1.16](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) and was
advertised a few months prior to the release. These API versions would have been
announced for deprecation prior to this again. This shows that there is a good
policy in place which informs consumers of API version support.
Helm templates specify a [Kubernetes API
group](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups)
when defining a Kubernetes object, similar to a Kubernetes manifest file. It is
specified in the `apiVersion` field of the template and it identifies the API
version of the Kubernetes object. This means that Helm users and chart
maintainers need to be aware when Kubernetes API versions have been deprecated
and in what Kubernetes version they will be removed.
## Chart Maintainers
You should audit your charts, checking for Kubernetes API versions that are
deprecated or removed in a Kubernetes version. Any API versions that are due to
go out of support, or already have, should be updated to a supported version and
a new version of the chart released. The API version is defined by the
`kind` and `apiVersion` fields. For example, here is a removed `Deployment`
object API version in Kubernetes 1.16:
```yaml
apiVersion: apps/v1beta1
kind: Deployment
```
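For reference, the supported replacement for this example is the `apps/v1` API version (generally available since Kubernetes 1.9):
```yaml
apiVersion: apps/v1
kind: Deployment
```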
## Helm Users
You should audit the charts that you use (similar to [chart
maintainers](#chart-maintainers)) and identify any charts where API versions are
deprecated or removed in a Kubernetes version. For the charts identified, you
need to check for the latest version of the chart (which has supported API
versions) or update the chart yourself.
Additionally, you also need to audit any charts deployed (i.e. Helm releases)
checking again for any deprecated or removed API versions. This can be done by
getting details of a release using the `helm get manifest` command.
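For example, a quick (if rough) way to list the API versions in use by a deployed release, assuming a release named `my-release` in namespace `my-namespace`:
```console
$ helm get manifest my-release --namespace my-namespace | grep -E '^(apiVersion|kind):'
```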
The means for updating a Helm release to supported APIs depends on your findings
as follows:
1. If you find deprecated API versions only then:
- Perform a `helm upgrade` with a version of the chart with supported
Kubernetes API versions
- Add a description in the upgrade, something along the lines to not perform a
rollback to a Helm version prior to this current version
2. If you find any API version(s) that is/are removed in a Kubernetes version
then:
- If you are running a Kubernetes version where the API version(s) are still
available (for example, you are on Kubernetes 1.15 and found you use APIs
that will be removed in Kubernetes 1.16):
- Follow the step 1 procedure
- Otherwise (for example, you are already running a Kubernetes version where
some API versions reported by `helm get manifest` are no longer available):
- You need to edit the release manifest that is stored in the cluster to
update the API versions to supported APIs. See [Updating API Versions of a
Release Manifest](#updating-api-versions-of-a-release-manifest) for more
details
> Note: In all cases of updating a Helm release with supported APIs, you should
never rollback the release to a version prior to the release version with the
supported APIs.
> Recommendation: The best practice is to upgrade releases using deprecated API
versions to supported API versions, prior to upgrading to a Kubernetes cluster
that removes those API versions.
If you don't update a release as suggested previously, you will have an error
similar to the following when trying to upgrade a release in a Kubernetes
version where its API version(s) is/are removed:
```
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s)
for this kubernetes version and it is therefore unable to build the kubernetes
objects for performing the diff. error from kubernetes: unable to recognize "":
no matches for kind "Deployment" in version "apps/v1beta1"
```
Helm fails in this scenario because it attempts to create a diff patch between
the current deployed release (which contains the Kubernetes APIs that are
removed in this Kubernetes version) against the chart you are passing with the
updated/supported API versions. The underlying reason for failure is that when
Kubernetes removes an API version, the Kubernetes Go client library can no
longer parse the deprecated objects and Helm therefore fails when calling the
library. Helm unfortunately is unable to recover from this situation and is no
longer able to manage such a release. See [Updating API Versions of a Release
Manifest](#updating-api-versions-of-a-release-manifest) for more details on how
to recover from this scenario.
## Updating API Versions of a Release Manifest
The manifest is a property of the Helm release object which is stored in the
data field of a Secret (default) or ConfigMap in the cluster. The data field
contains a gzipped object which is base 64 encoded (there is an additional base
64 encoding for a Secret). There is a Secret/ConfigMap per release
version/revision in the namespace of the release.
You can use the Helm [mapkubeapis](https://github.com/helm/helm-mapkubeapis)
plugin to perform the update of a release to supported APIs. Check out the
readme for more details.
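A sketch of that flow, assuming a release named `my-release` in namespace `my-namespace` (check the plugin readme for the exact invocation and flags):
```console
$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release --namespace my-namespace
```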
Alternatively, you can follow these manual steps to perform an update of the API
versions of a release manifest. Depending on your configuration you will follow
the steps for the Secret or ConfigMap backend.
- Get the name of the Secret or Configmap associated with the latest deployed
release:
- Secrets backend: `kubectl get secret -l
owner=helm,status=deployed,name=<release_name> --namespace
<release_namespace> | awk '{print $1}' | grep -v NAME`
- ConfigMap backend: `kubectl get configmap -l
owner=helm,status=deployed,name=<release_name> --namespace
<release_namespace> | awk '{print $1}' | grep -v NAME`
- Get latest deployed release details:
- Secrets backend: `kubectl get secret <release_secret_name> -n
<release_namespace> -o yaml > release.yaml`
- ConfigMap backend: `kubectl get configmap <release_configmap_name> -n
<release_namespace> -o yaml > release.yaml`
- Backup the release in case you need to restore if something goes wrong:
- `cp release.yaml release.bak`
- In case of emergency, restore: `kubectl apply -f release.bak -n
<release_namespace>`
- Decode the release object:
- Secrets backend:`cat release.yaml | grep -oP '(?<=release: ).*' | base64 -d
| base64 -d | gzip -d > release.data.decoded`
- ConfigMap backend: `cat release.yaml | grep -oP '(?<=release: ).*' | base64
-d | gzip -d > release.data.decoded`
- Change API versions of the manifests. Can use any tool (e.g. editor) to make
the changes. This is in the `manifest` field of your decoded release object
(`release.data.decoded`)
- Encode the release object:
- Secrets backend: `cat release.data.decoded | gzip | base64 | base64`
- ConfigMap backend: `cat release.data.decoded | gzip | base64`
- Replace `data.release` property value in the deployed release file
(`release.yaml`) with the new encoded release object
- Apply file to namespace: `kubectl apply -f release.yaml -n
<release_namespace>`
- Perform a `helm upgrade` with a version of the chart with supported Kubernetes
API versions
- Add a description in the upgrade, something along the lines to not perform a
  rollback to a Helm version prior to this current version
---
title: "Charts"
description: "Explains the chart format, and provides basic guidance for building charts with Helm."
aliases: [
"docs/developing_charts/",
"developing_charts"
]
weight: 1
---
Helm uses a packaging format called _charts_. A chart is a collection of files
that describe a related set of Kubernetes resources. A single chart might be
used to deploy something simple, like a memcached pod, or something complex,
like a full web app stack with HTTP servers, databases, caches, and so on.
Charts are created as files laid out in a particular directory tree. They can be
packaged into versioned archives to be deployed.
If you want to download and look at the files for a published chart, without
installing it, you can do so with `helm pull chartrepo/chartname`.
This document explains the chart format, and provides basic guidance for
building charts with Helm.
## The Chart File Structure
A chart is organized as a collection of files inside of a directory. The
directory name is the name of the chart (without versioning information). Thus,
a chart describing WordPress would be stored in a `wordpress/` directory.
Inside of this directory, Helm will expect a structure that matches this:
```text
wordpress/
Chart.yaml # A YAML file containing information about the chart
LICENSE # OPTIONAL: A plain text file containing the license for the chart
README.md # OPTIONAL: A human-readable README file
values.yaml # The default configuration values for this chart
values.schema.json # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file
charts/ # A directory containing any charts upon which this chart depends.
crds/ # Custom Resource Definitions
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
```
Helm reserves use of the `charts/`, `crds/`, and `templates/` directories, and
of the listed file names. Other files will be left as they are.
## The Chart.yaml File
The `Chart.yaml` file is required for a chart. It contains the following fields:
```yaml
apiVersion: The chart API version (required)
name: The name of the chart (required)
version: A SemVer 2 version (required)
kubeVersion: A SemVer range of compatible Kubernetes versions (optional)
description: A single-sentence description of this project (optional)
type: The type of the chart (optional)
keywords:
- A list of keywords about this project (optional)
home: The URL of this project's home page (optional)
sources:
- A list of URLs to source code for this project (optional)
dependencies: # A list of the chart requirements (optional)
- name: The name of the chart (nginx)
version: The version of the chart ("1.2.3")
repository: (optional) The repository URL ("https://example.com/charts") or alias ("@repo-name")
condition: (optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled )
tags: # (optional)
- Tags can be used to group charts for enabling/disabling together
import-values: # (optional)
- ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times
maintainers: # (optional)
  - name: The maintainer's name (required for each maintainer)
    email: The maintainer's email (optional for each maintainer)
url: A URL for the maintainer (optional for each maintainer)
icon: A URL to an SVG or PNG image to be used as an icon (optional).
appVersion: The version of the app that this contains (optional). Needn't be SemVer. Quotes recommended.
deprecated: Whether this chart is deprecated (optional, boolean)
annotations:
example: A list of annotations keyed by name (optional).
```
As of [v3.3.2](https://github.com/helm/helm/releases/tag/v3.3.2), additional
fields are not allowed.
The recommended approach is to add custom metadata in `annotations`.
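For example, a minimal `Chart.yaml` that records build metadata under
`annotations` might look like the following sketch (the annotation keys and
values are illustrative, not a Helm convention):
```yaml
apiVersion: v2
name: mychart
description: An example chart
type: application
version: 0.1.0
appVersion: "1.16.0"
annotations:
  # custom metadata belongs here rather than in unknown top-level fields
  example.com/build-id: "20231104.1"
  example.com/owning-team: platform
```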
### Charts and Versioning
Every chart must have a version number. A version must follow the [SemVer
2](https://semver.org/spec/v2.0.0.html) standard. Unlike Helm Classic, Helm v2
and later uses version numbers as release markers. Packages in repositories are
identified by name plus version.
For example, an `nginx` chart whose version field is set to `version: 1.2.3`
will be named:
```text
nginx-1.2.3.tgz
```
More complex SemVer 2 names are also supported, such as `version:
1.2.3-alpha.1+ef365`. But non-SemVer names are explicitly disallowed by the
system.
**NOTE:** Whereas Helm Classic and Deployment Manager were both very GitHub
oriented when it came to charts, Helm v2 and later does not rely upon or require
GitHub or even Git. Consequently, it does not use Git SHAs for versioning at
all.
The `version` field inside of the `Chart.yaml` is used by many of the Helm
tools, including the CLI. When generating a package, the `helm package` command
will use the version that it finds in the `Chart.yaml` as a token in the package
name. The system assumes that the version number in the chart package name
matches the version number in the `Chart.yaml`. Failure to meet this assumption
will cause an error.
### The `apiVersion` Field
The `apiVersion` field should be `v2` for Helm charts that require at least Helm
3. Charts supporting previous Helm versions have an `apiVersion` set to `v1` and
are still installable by Helm 3.
Changes from `v1` to `v2`:
- A `dependencies` field defining chart dependencies, which were located in a
separate `requirements.yaml` file for `v1` charts (see [Chart
Dependencies](#chart-dependencies)).
- The `type` field, discriminating application and library charts (see [Chart
Types](#chart-types)).
### The `appVersion` Field
Note that the `appVersion` field is not related to the `version` field. It is a
way of specifying the version of the application. For example, the `drupal`
chart may have an `appVersion: "8.2.1"`, indicating that the version of Drupal
included in the chart (by default) is `8.2.1`. This field is informational, and
has no impact on chart version calculations. Wrapping the version in quotes is highly recommended. It forces the YAML parser to treat the version number as a string. Leaving it unquoted can lead to parsing issues in some cases. For example, YAML interprets `1.0` as a floating point value, and a git commit SHA like `1234e10` as scientific notation.
As of Helm v3.5.0, `helm create` wraps the default `appVersion` field in quotes.
### The `kubeVersion` Field
The optional `kubeVersion` field can define semver constraints on supported
Kubernetes versions. Helm will validate the version constraints when installing
the chart and fail if the cluster runs an unsupported Kubernetes version.
Version constraints may comprise space separated AND comparisons such as
```
>= 1.13.0 < 1.15.0
```
which themselves can be combined with the OR `||` operator like in the following
example
```
>= 1.13.0 < 1.14.0 || >= 1.14.1 < 1.15.0
```
In this example the version `1.14.0` is excluded, which can make sense if a bug
in certain versions is known to prevent the chart from running properly.
Apart from version constraints employing the operators `=` `!=` `>` `<` `>=` `<=`, the
following shorthand notations are supported
* hyphen ranges for closed intervals, where `1.1 - 2.3.4` is equivalent to `>=
1.1 <= 2.3.4`.
* wildcards `x`, `X` and `*`, where `1.2.x` is equivalent to `>= 1.2.0 <
1.3.0`.
* tilde ranges (patch version changes allowed), where `~1.2.3` is equivalent to
`>= 1.2.3 < 1.3.0`.
* caret ranges (minor version changes allowed), where `^1.2.3` is equivalent to
`>= 1.2.3 < 2.0.0`.
For a detailed explanation of supported semver constraints see
[Masterminds/semver](https://github.com/Masterminds/semver).
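Putting this together, a hypothetical `Chart.yaml` entry that accepts clusters
from 1.25.0 up to (but not including) 1.28.0, while excluding 1.26.0, could be
written as:
```yaml
# excerpt from Chart.yaml (version numbers are illustrative)
kubeVersion: ">= 1.25.0 < 1.26.0 || > 1.26.0 < 1.28.0"
```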
### Deprecating Charts
When managing charts in a Chart Repository, it is sometimes necessary to
deprecate a chart. The optional `deprecated` field in `Chart.yaml` can be used
to mark a chart as deprecated. If the **latest** version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated. The chart name can be later reused by publishing a newer version
that is not marked as deprecated. The workflow for deprecating charts is:
1. Update chart's `Chart.yaml` to mark the chart as deprecated, bumping the
version
2. Release the new chart version in the Chart Repository
3. Remove the chart from the source repository (e.g. git)
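For step 1, the final release of the chart might carry a `Chart.yaml` similar
to this sketch (the name and version are placeholders):
```yaml
apiVersion: v2
name: mychart
version: 1.4.1   # bumped for the deprecation release
deprecated: true
```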
### Chart Types
The `type` field defines the type of chart. There are two types: `application`
and `library`. Application is the default type and it is the standard chart
which can be operated on fully. The [library chart](https://helm.sh/docs/topics/library_charts/) provides utilities or functions for the
chart builder. A library chart differs from an application chart because it is
not installable and usually doesn't contain any resource objects.
**Note:** An application chart can be used as a library chart. This is enabled
by setting the type to `library`. The chart will then be rendered as a library
chart where all utilities and functions can be leveraged. All resource objects
of the chart will not be rendered.
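A library chart's `Chart.yaml` is distinguished only by its `type`; a minimal
sketch:
```yaml
apiVersion: v2
name: mylibchart
type: library
version: 0.1.0
```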
## Chart LICENSE, README and NOTES
Charts can also contain files that describe the installation, configuration,
usage and license of a chart.
A LICENSE is a plain text file containing the
[license](https://en.wikipedia.org/wiki/Software_license) for the chart. The
chart can contain a license as it may have programming logic in the templates
and would therefore not be configuration only. There can also be separate
license(s) for the application installed by the chart, if required.
A README for a chart should be formatted in Markdown (README.md), and should
generally contain:
- A description of the application or service the chart provides
- Any prerequisites or requirements to run the chart
- Descriptions of options in `values.yaml` and default values
- Any other information that may be relevant to the installation or
configuration of the chart
When hubs and other user interfaces display details about a chart that detail is
pulled from the content in the `README.md` file.
The chart can also contain a short plain text `templates/NOTES.txt` file that
will be printed out after installation, and when viewing the status of a
release. This file is evaluated as a [template](#templates-and-values), and can
be used to display usage notes, next steps, or any other information relevant to
a release of the chart. For example, instructions could be provided for
connecting to a database, or accessing a web UI. Since this file is printed to
STDOUT when running `helm install` or `helm status`, it is recommended to keep
the content brief and point to the README for greater detail.
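A short `templates/NOTES.txt` might look like the following sketch; it is
rendered with the same template engine, so release and chart values are
available:
```text
Thank you for installing {{ .Chart.Name }}, release {{ .Release.Name }}.

To learn more about the release, try:

  $ helm status {{ .Release.Name }}
  $ helm get all {{ .Release.Name }}

See the chart README for configuration details.
```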
## Chart Dependencies
In Helm, one chart may depend on any number of other charts. These dependencies
can be dynamically linked using the `dependencies` field in `Chart.yaml` or
brought in to the `charts/` directory and managed manually.
### Managing Dependencies with the `dependencies` field
The charts required by the current chart are defined as a list in the
`dependencies` field.
```yaml
dependencies:
- name: apache
version: 1.2.3
repository: https://example.com/charts
- name: mysql
version: 3.2.1
repository: https://another.example.com/charts
```
- The `name` field is the name of the chart you want.
- The `version` field is the version of the chart you want.
- The `repository` field is the full URL to the chart repository. Note that you
must also use `helm repo add` to add that repo locally.
- You might use the name of the repo instead of the URL
```console
$ helm repo add fantastic-charts https://charts.helm.sh/incubator
```
```yaml
dependencies:
- name: awesomeness
version: 1.0.0
repository: "@fantastic-charts"
```
Once you have defined dependencies, you can run `helm dependency update` and it
will use your dependency file to download all the specified charts into your
`charts/` directory for you.
```console
$ helm dep up foochart
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "example" chart repository
...Successfully got an update from the "another" chart repository
Update Complete. Happy Helming!
Saving 2 charts
Downloading apache from repo https://example.com/charts
Downloading mysql from repo https://another.example.com/charts
```
When `helm dependency update` retrieves charts, it will store them as chart
archives in the `charts/` directory. So for the example above, one would expect
to see the following files in the charts directory:
```text
charts/
apache-1.2.3.tgz
mysql-3.2.1.tgz
```
#### Alias field in dependencies
In addition to the other fields above, each requirements entry may contain the
optional field `alias`.
Adding an alias for a dependency chart adds it to the dependencies, using the
alias as the name of the new dependency.
One can use `alias` in cases where they need to access the same chart under
other name(s).
```yaml
# parentchart/Chart.yaml
dependencies:
- name: subchart
repository: http://localhost:10191
version: 0.1.0
alias: new-subchart-1
- name: subchart
repository: http://localhost:10191
version: 0.1.0
alias: new-subchart-2
- name: subchart
repository: http://localhost:10191
version: 0.1.0
```
In the above example we will get 3 dependencies in all for `parentchart`:
```text
subchart
new-subchart-1
new-subchart-2
```
The manual way of achieving this is by copy/pasting the same chart in the
`charts/` directory multiple times with different names.
#### Tags and Condition fields in dependencies
In addition to the other fields above, each requirements entry may contain the
optional fields `tags` and `condition`.
All charts are loaded by default. If `tags` or `condition` fields are present,
they will be evaluated and used to control loading for the chart(s) they are
applied to.
Condition - The condition field holds one or more YAML paths (delimited by
commas). If this path exists in the top parent's values and resolves to a
boolean value, the chart will be enabled or disabled based on that boolean
value. Only the first valid path found in the list is evaluated and if no paths
exist then the condition has no effect.
Tags - The tags field is a YAML list of labels to associate with this chart. In
the top parent's values, all charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
```yaml
# parentchart/Chart.yaml
dependencies:
- name: subchart1
repository: http://localhost:10191
version: 0.1.0
condition: subchart1.enabled,global.subchart1.enabled
tags:
- front-end
- subchart1
- name: subchart2
repository: http://localhost:10191
version: 0.1.0
condition: subchart2.enabled,global.subchart2.enabled
tags:
- back-end
- subchart2
```
```yaml
# parentchart/values.yaml
subchart1:
enabled: true
tags:
front-end: false
back-end: true
```
In the above example all charts with the tag `front-end` would be disabled but
since the `subchart1.enabled` path evaluates to 'true' in the parent's values,
the condition will override the `front-end` tag and `subchart1` will be enabled.
Since `subchart2` is tagged with `back-end` and that tag evaluates to `true`,
`subchart2` will be enabled. Also note that although `subchart2` has a condition
specified, there is no corresponding path and value in the parent's values so
that condition has no effect.
##### Using the CLI with Tags and Conditions
The `--set` parameter can be used as usual to alter tag and condition values.
```console
helm install --set tags.front-end=true --set subchart2.enabled=false
```
##### Tags and Condition Resolution
- **Conditions (when set in values) always override tags.** The first condition
path that exists wins and subsequent ones for that chart are ignored.
- Tags are evaluated as 'if any of the chart's tags are true then enable the
chart'.
- Tags and conditions values must be set in the top parent's values.
- The `tags:` key in values must be a top level key. Globals and nested `tags:`
tables are not currently supported.
#### Importing Child Values via dependencies
In some cases it is desirable to allow a child chart's values to propagate to
the parent chart and be shared as common defaults. An additional benefit of
using the `exports` format is that it will enable future tooling to introspect
user-settable values.
The keys containing the values to be imported can be specified in the parent
chart's `dependencies` in the field `import-values` using a YAML list. Each item
in the list is a key which is imported from the child chart's `exports` field.
To import values not contained in the `exports` key, use the
[child-parent](#using-the-child-parent-format) format. Examples of both formats
are described below.
##### Using the exports format
If a child chart's `values.yaml` file contains an `exports` field at the root,
its contents may be imported directly into the parent's values by specifying the
keys to import as in the example below:
```yaml
# parent's Chart.yaml file
dependencies:
- name: subchart
repository: http://localhost:10191
version: 0.1.0
import-values:
- data
```
```yaml
# child's values.yaml file
exports:
data:
myint: 99
```
Since we are specifying the key `data` in our import list, Helm looks in the
`exports` field of the child chart for `data` key and imports its contents.
The final parent values would contain our exported field:
```yaml
# parent's values
myint: 99
```
Please note the parent key `data` is not contained in the parent's final values.
If you need to specify the parent key, use the 'child-parent' format.
##### Using the child-parent format
To access values that are not contained in the `exports` key of the child
chart's values, you will need to specify the source key of the values to be
imported (`child`) and the destination path in the parent chart's values
(`parent`).
The `import-values` in the example below instructs Helm to take any values found
at `child:` path and copy them to the parent's values at the path specified in
`parent:`
```yaml
# parent's Chart.yaml file
dependencies:
- name: subchart1
repository: http://localhost:10191
version: 0.1.0
...
import-values:
- child: default.data
parent: myimports
```
In the above example, values found at `default.data` in the subchart1's values
will be imported to the `myimports` key in the parent chart's values as detailed
below:
```yaml
# parent's values.yaml file
myimports:
myint: 0
mybool: false
mystring: "helm rocks!"
```
```yaml
# subchart1's values.yaml file
default:
data:
myint: 999
mybool: true
```
The parent chart's resulting values would be:
```yaml
# parent's final values
myimports:
myint: 999
mybool: true
mystring: "helm rocks!"
```
The parent's final values now contains the `myint` and `mybool` fields imported
from subchart1.
### Managing Dependencies manually via the `charts/` directory
If more control over dependencies is desired, these dependencies can be
expressed explicitly by copying the dependency charts into the `charts/`
directory.
A dependency should be an unpacked chart directory but its name cannot start
with `_` or `.`. Such files are ignored by the chart loader.
For example, if the WordPress chart depends on the Apache chart, the Apache
chart (of the correct version) is supplied in the WordPress chart's `charts/`
directory:
```yaml
wordpress:
Chart.yaml
# ...
charts/
apache/
Chart.yaml
# ...
mysql/
Chart.yaml
# ...
```
The example above shows how the WordPress chart expresses its dependency on
Apache and MySQL by including those charts inside of its `charts/` directory.
**TIP:** _To drop a dependency into your `charts/` directory, use the `helm
pull` command_
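For example, assuming a repository at `https://example.com/charts` that serves
the `apache` chart used earlier, the dependency could be vendored like this:
```console
$ helm repo add example https://example.com/charts
$ helm pull example/apache --version 1.2.3 --untar --untardir wordpress/charts
```
With `--untardir`, the chart is expanded into `wordpress/charts/apache/` rather
than saved as an archive.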
### Operational aspects of using dependencies
The above sections explain how to specify chart dependencies, but how does this
affect chart installation using `helm install` and `helm upgrade`?
Suppose that a chart named "A" creates the following Kubernetes objects
- namespace "A-Namespace"
- statefulset "A-StatefulSet"
- service "A-Service"
Furthermore, A is dependent on chart B that creates objects
- namespace "B-Namespace"
- replicaset "B-ReplicaSet"
- service "B-Service"
After installation/upgrade of chart A a single Helm release is created/modified.
The release will create/update all of the above Kubernetes objects in the
following order:
- A-Namespace
- B-Namespace
- A-Service
- B-Service
- B-ReplicaSet
- A-StatefulSet
This is because when Helm installs/upgrades charts, the Kubernetes objects from
the charts and all its dependencies are
- aggregated into a single set; then
- sorted by type followed by name; and then
- created/updated in that order.
Hence a single release is created with all the objects for the chart and its
dependencies.
The install order of Kubernetes types is given by the enumeration InstallOrder
in kind_sorter.go (see [the Helm source
file](https://github.com/helm/helm/blob/484d43913f97292648c867b56768775a55e4bba6/pkg/releaseutil/kind_sorter.go)).
## Templates and Values
Helm Chart templates are written in the [Go template
language](https://golang.org/pkg/text/template/), with the addition of 50 or so
add-on template functions [from the Sprig
library](https://github.com/Masterminds/sprig) and a few other specialized
functions.
All template files are stored in a chart's `templates/` folder. When Helm
renders the charts, it will pass every file in that directory through the
template engine.
Values for the templates are supplied two ways:
- Chart developers may supply a file called `values.yaml` inside of a chart.
This file can contain default values.
- Chart users may supply a YAML file that contains values. This can be provided
on the command line with `helm install`.
When a user supplies custom values, these values will override the values in the
chart's `values.yaml` file.
### Template Files
Template files follow the standard conventions for writing Go templates (see
[the text/template Go package
documentation](https://golang.org/pkg/text/template/) for details). An example
template file might look something like this:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: deis
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{ .Values.imageRegistry }}/postgres:{{ .Values.dockerTag }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{ default "minio" .Values.storage }}
```
The above example, based loosely on
[https://github.com/deis/charts](https://github.com/deis/charts), is a template
for a Kubernetes replication controller. It can use the following four template
values (usually defined in a `values.yaml` file):
- `imageRegistry`: The source registry for the Docker image.
- `dockerTag`: The tag for the docker image.
- `pullPolicy`: The Kubernetes pull policy.
- `storage`: The storage backend, whose default is set to `"minio"`
All of these values are defined by the template author. Helm does not require or
dictate parameters.
To see many working charts, check out the CNCF [Artifact
Hub](https://artifacthub.io/packages/search?kind=0).
### Predefined Values
Values that are supplied via a `values.yaml` file (or via the `--set` flag) are
accessible from the `.Values` object in a template. But there are other
pre-defined pieces of data you can access in your templates.
The following values are pre-defined, are available to every template, and
cannot be overridden. As with all values, the names are _case sensitive_.
- `Release.Name`: The name of the release (not the chart)
- `Release.Namespace`: The namespace the chart was released to.
- `Release.Service`: The service that conducted the release.
- `Release.IsUpgrade`: This is set to true if the current operation is an
upgrade or rollback.
- `Release.IsInstall`: This is set to true if the current operation is an
install.
- `Chart`: The contents of the `Chart.yaml`. Thus, the chart version is
obtainable as `Chart.Version` and the maintainers are in `Chart.Maintainers`.
- `Files`: A map-like object containing all non-special files in the chart. This
  will not give you access to templates, but will give you access to additional
  files that are present (unless they are excluded using `.helmignore`). Files
  can be accessed using `{{ index .Files "file.name" }}` or using the
  `{{ .Files.Get name }}` function. You can also access the contents of the file
  as `[]byte` using `{{ .Files.GetBytes }}`
- `Capabilities`: A map-like object that contains information about the versions
  of Kubernetes (`{{ .Capabilities.KubeVersion }}`) and the supported Kubernetes
  API versions (`{{ .Capabilities.APIVersions }}`)
**NOTE:** Any unknown `Chart.yaml` fields will be dropped. They will not be
accessible inside of the `Chart` object. Thus, `Chart.yaml` cannot be used to
pass arbitrarily structured data into the template. The values file can be used
for that, though.
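As an illustration, a template in a hypothetical chart could combine the
`Files` and `Capabilities` objects like this (the ConfigMap name and the
`config/app.conf` path are assumptions, not part of any real chart):
```yaml
# templates/configmap.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  kubeVersion: {{ .Capabilities.KubeVersion.Version | quote }}
  # embed a non-template file shipped with the chart
  app.conf: |
{{ .Files.Get "config/app.conf" | indent 4 }}
```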
### Values files
Considering the template in the previous section, a `values.yaml` file that
supplies the necessary values would look like this:
```yaml
imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "s3"
```
A values file is formatted in YAML. A chart may include a default `values.yaml`
file. The Helm install command allows a user to override values by supplying
additional YAML values:
```console
$ helm install --generate-name --values=myvals.yaml wordpress
```
When values are passed in this way, they will be merged into the default values
file. For example, consider a `myvals.yaml` file that looks like this:
```yaml
storage: "gcs"
```
When this is merged with the `values.yaml` in the chart, the resulting generated
content will be:
```yaml
imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "gcs"
```
Note that only the last field was overridden.
**NOTE:** The default values file included inside of a chart _must_ be named
`values.yaml`. But files specified on the command line can be named anything.
**NOTE:** If the `--set` flag is used on `helm install` or `helm upgrade`, those
values are simply converted to YAML on the client side.
**NOTE:** If any required entries in the values file exist, they can be declared
as required in the chart template by using the ['required' function](https://helm.sh/docs/howto/charts_tips_and_tricks/)
Any of these values are then accessible inside of templates using the `.Values`
object:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: deis
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{ .Values.imageRegistry }}/postgres:{{ .Values.dockerTag }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{ default "minio" .Values.storage }}
```
### Scope, Dependencies, and Values
Values files can declare values for the top-level chart, as well as for any of
the charts that are included in that chart's `charts/` directory. Or, to phrase
it differently, a values file can supply values to the chart as well as to any
of its dependencies. For example, the demonstration WordPress chart above has
both `mysql` and `apache` as dependencies. The values file could supply values
to all of these components:
```yaml
title: "My WordPress Site" # Sent to the WordPress template
mysql:
max_connections: 100 # Sent to MySQL
password: "secret"
apache:
port: 8080 # Passed to Apache
```
Charts at a higher level have access to all of the variables defined beneath. So
the WordPress chart can access the MySQL password as `.Values.mysql.password`.
But lower level charts cannot access things in parent charts, so MySQL will not
be able to access the `title` property. Nor, for that matter, can it access
`apache.port`.
Values are namespaced, but namespaces are pruned. So for the WordPress chart, it
can access the MySQL password field as `.Values.mysql.password`. But for the
MySQL chart, the scope of the values has been reduced and the namespace prefix
removed, so it will see the password field simply as `.Values.password`.
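To make the pruning concrete, here is how the same password would be referenced
from each side (the file names are illustrative):
```yaml
# wordpress/templates/secret.yaml (parent chart)
password: {{ .Values.mysql.password | b64enc }}
```
```yaml
# wordpress/charts/mysql/templates/secret.yaml (subchart)
password: {{ .Values.password | b64enc }}
```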
#### Global Values
As of 2.0.0-Alpha.2, Helm supports a special "global" value. Consider this
modified version of the previous example:
```yaml
title: "My WordPress Site" # Sent to the WordPress template
global:
app: MyWordPress
mysql:
max_connections: 100 # Sent to MySQL
password: "secret"
apache:
port: 8080 # Passed to Apache
```
The above adds a `global` section with the value `app: MyWordPress`. This value
is available to _all_ charts as `.Values.global.app`.
For example, the `mysql` templates may access `app` as `{{ .Values.global.app }}`,
and so can the `apache` chart. Effectively, the values
file above is regenerated like this:
```yaml
title: "My WordPress Site" # Sent to the WordPress template
global:
app: MyWordPress
mysql:
global:
app: MyWordPress
max_connections: 100 # Sent to MySQL
password: "secret"
apache:
global:
app: MyWordPress
port: 8080 # Passed to Apache
```
This provides a way of sharing one top-level variable with all subcharts, which
is useful for things like setting `metadata` properties like labels.
If a subchart declares a global variable, that global will be passed _downward_
(to the subchart's subcharts), but not _upward_ to the parent chart. There is no
way for a subchart to influence the values of the parent chart.
Also, global variables of parent charts take precedence over the global
variables from subcharts.
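For instance, a template in the `mysql` subchart could attach the shared value
as a label (the label key here is arbitrary):
```yaml
# charts/mysql/templates/deployment.yaml (excerpt, illustrative)
metadata:
  labels:
    app.kubernetes.io/part-of: {{ .Values.global.app }}
```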
### Schema Files
Sometimes, a chart maintainer might want to define a structure on their values.
This can be done by defining a schema in the `values.schema.json` file. A schema
is represented as a [JSON Schema](https://json-schema.org/). It might look
something like this:
```json
{
"$schema": "https://json-schema.org/draft-07/schema#",
"properties": {
"image": {
"description": "Container Image",
"properties": {
"repo": {
"type": "string"
},
"tag": {
"type": "string"
}
},
"type": "object"
},
"name": {
"description": "Service name",
"type": "string"
},
"port": {
"description": "Port",
"minimum": 0,
"type": "integer"
},
"protocol": {
"type": "string"
}
},
"required": [
"protocol",
"port"
],
"title": "Values",
"type": "object"
}
```
This schema will be applied to the values to validate them. Validation occurs when
any of the following commands are invoked:
- `helm install`
- `helm upgrade`
- `helm lint`
- `helm template`
An example of a `values.yaml` file that meets the requirements of this schema
might look something like this:
```yaml
name: frontend
protocol: https
port: 443
```
Note that the schema is applied to the final `.Values` object, and not just to
the `values.yaml` file. This means that the following `yaml` file is valid,
given that the chart is installed with the appropriate `--set` option shown
below.
```yaml
name: frontend
protocol: https
```
```console
helm install --set port=443
```
Furthermore, the final `.Values` object is checked against *all* subchart
schemas. This means that restrictions on a subchart can't be circumvented by a
parent chart. This also works backwards - if a subchart has a requirement that
is not met in the subchart's `values.yaml` file, the parent chart *must* satisfy
those restrictions in order to be valid.
### References
When it comes to writing templates, values, and schema files, there are several
standard references that will help you out.
- [Go templates](https://godoc.org/text/template)
- [Extra template functions](https://godoc.org/github.com/Masterminds/sprig)
- [The YAML format](https://yaml.org/spec/)
- [JSON Schema](https://json-schema.org/)
## Custom Resource Definitions (CRDs)
Kubernetes provides a mechanism for declaring new types of Kubernetes objects.
Using CustomResourceDefinitions (CRDs), Kubernetes developers can declare custom
resource types.
In Helm 3, CRDs are treated as a special kind of object. They are installed
before the rest of the chart, and are subject to some limitations.
CRD YAML files should be placed in the `crds/` directory inside of a chart.
Multiple CRDs (separated by YAML start and end markers) may be placed in the
same file. Helm will attempt to load _all_ of the files in the CRD directory
into Kubernetes.
CRD files _cannot be templated_. They must be plain YAML documents.
When Helm installs a new chart, it will upload the CRDs, pause until the CRDs
are made available by the API server, and then start the template engine, render
the rest of the chart, and upload it to Kubernetes. Because of this ordering,
CRD information is available in the `.Capabilities` object in Helm templates,
and Helm templates may create new instances of objects that were declared in
CRDs.
For example, if your chart had a CRD for `CronTab` in the `crds/` directory, you
may create instances of the `CronTab` kind in the `templates/` directory:
```text
crontabs/
Chart.yaml
crds/
crontab.yaml
templates/
mycrontab.yaml
```
The `crontab.yaml` file must contain the CRD with no template directives:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
scope: Namespaced
names:
plural: crontabs
singular: crontab
kind: CronTab
```
Then the template `mycrontab.yaml` may create a new `CronTab` (using templates
as usual):
```yaml
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: {{ .Values.name }}
spec:
# ...
```
Helm will make sure that the `CronTab` kind has been installed and is available
from the Kubernetes API server before it proceeds installing the things in
`templates/`.
### Limitations on CRDs
Unlike most objects in Kubernetes, CRDs are installed globally. For that reason,
Helm takes a very cautious approach in managing CRDs. CRDs are subject to the
following limitations:
- CRDs are never reinstalled. If Helm determines that the CRDs in the `crds/`
directory are already present (regardless of version), Helm will not attempt
to install or upgrade.
- CRDs are never installed on upgrade or rollback. Helm will only create CRDs on
installation operations.
- CRDs are never deleted. Deleting a CRD automatically deletes all of the CRD's
contents across all namespaces in the cluster. Consequently, Helm will not
delete CRDs.
Operators who want to upgrade or delete CRDs are encouraged to do this manually
and with great care.
## Using Helm to Manage Charts
The `helm` tool has several commands for working with charts.
It can create a new chart for you:
```console
$ helm create mychart
Created mychart/
```
Once you have edited a chart, `helm` can package it into a chart archive for
you:
```console
$ helm package mychart
Archived mychart-0.1.-.tgz
```
You can also use `helm` to help you find issues with your chart's formatting or
information:
```console
$ helm lint mychart
No issues found
```
## Chart Repositories
A _chart repository_ is an HTTP server that houses one or more packaged charts.
While `helm` can be used to manage local chart directories, when it comes to
sharing charts, the preferred mechanism is a chart repository.
Any HTTP server that can serve YAML files and tar files and can answer GET
requests can be used as a repository server. The Helm team has tested some
servers, including Google Cloud Storage with website mode enabled, and S3 with
website mode enabled.
A repository is characterized primarily by the presence of a special file called
`index.yaml` that has a list of all of the packages supplied by the repository,
together with metadata that allows retrieving and verifying those packages.
On the client side, repositories are managed with the `helm repo` commands.
However, Helm does not provide tools for uploading charts to remote repository
servers. This is because doing so would add substantial requirements to an
implementing server, and thus raise the barrier for setting up a repository.
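A minimal repository can therefore be assembled by hand: package the charts,
generate the index, and serve the directory over HTTP (the URL below is a
placeholder):
```console
$ helm package mychart
$ helm repo index . --url https://example.com/charts
```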
## Chart Starter Packs
The `helm create` command takes an optional `--starter` option that lets you
specify a "starter chart". Also, the starter option has a short alias `-p`.
Examples of usage:
```console
helm create my-chart --starter starter-name
helm create my-chart -p starter-name
helm create my-chart -p /absolute/path/to/starter-name
```
Starters are just regular charts, but are located in
`$XDG_DATA_HOME/helm/starters`. As a chart developer, you may author charts that
are specifically designed to be used as starters. Such charts should be designed
with the following considerations in mind:
- The `Chart.yaml` will be overwritten by the generator.
- Users will expect to modify such a chart's contents, so documentation should
indicate how users can do so.
- All occurrences of `<CHARTNAME>` will be replaced with the specified chart
  name so that starter charts can be used as templates, except in some variable
  files. For example, if you use custom files in the `vars` directory or certain
  `README.md` files, `<CHARTNAME>` will NOT be replaced inside them.
  Additionally, the chart description is not inherited.
Currently the only way to add a chart to `$XDG_DATA_HOME/helm/starters` is to
manually copy it there. In your chart's documentation, you may want to explain
that process.
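On Linux, where `$XDG_DATA_HOME` defaults to `$HOME/.local/share`, that might
look like the following sketch:
```console
$ mkdir -p "${XDG_DATA_HOME:-$HOME/.local/share}/helm/starters"
$ cp -r mychart "${XDG_DATA_HOME:-$HOME/.local/share}/helm/starters/"
$ helm create newchart --starter mychart
```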
serviceAccount deis database containers name deis database image postgres imagePullPolicy ports containerPort 5432 env name DATABASE STORAGE value Scope Dependencies and Values Values files can declare values for the top level chart as well as for any of the charts that are included in that chart s charts directory Or to phrase it differently a values file can supply values to the chart as well as to any of its dependencies For example the demonstration WordPress chart above has both mysql and apache as dependencies The values file could supply values to all of these components yaml title My WordPress Site Sent to the WordPress template mysql max connections 100 Sent to MySQL password secret apache port 8080 Passed to Apache Charts at a higher level have access to all of the variables defined beneath So the WordPress chart can access the MySQL password as Values mysql password But lower level charts cannot access things in parent charts so MySQL will not be able to access the title property Nor for that matter can it access apache port Values are namespaced but namespaces are pruned So for the WordPress chart it can access the MySQL password field as Values mysql password But for the MySQL chart the scope of the values has been reduced and the namespace prefix removed so it will see the password field simply as Values password Global Values As of 2 0 0 Alpha 2 Helm supports special global value Consider this modified version of the previous example yaml title My WordPress Site Sent to the WordPress template global app MyWordPress mysql max connections 100 Sent to MySQL password secret apache port 8080 Passed to Apache The above adds a global section with the value app MyWordPress This value is available to all charts as Values global app For example the mysql templates may access app as and so can the apache chart Effectively the values file above is regenerated like this yaml title My WordPress Site Sent to the WordPress template global app MyWordPress mysql global app MyWordPress max connections 100 Sent to MySQL password secret apache global app MyWordPress port 8080 Passed to Apache This provides a way of sharing one top level variable with all subcharts which is useful for things like setting metadata properties like labels If a subchart declares a global variable that global will be passed downward to the subchart s subcharts but not upward to the parent chart There is no way for a subchart to influence the values of the parent chart Also global variables of parent charts take precedence over the global variables from subcharts Schema Files Sometimes a chart maintainer might want to define a structure on their values This can be done by defining a schema in the values schema json file A schema is represented as a JSON Schema https json schema org It might look something like this json schema https json schema org draft 07 schema properties image description Container Image properties repo type string tag type string type object name description Service name type string port description Port minimum 0 type integer protocol type string required protocol port title Values type object This schema will be applied to the values to validate it Validation occurs when any of the following commands are invoked helm install helm upgrade helm lint helm template An example of a values yaml file that meets the requirements of this schema might look something like this yaml name frontend protocol https port 443 Note that the schema is applied to the final Values object and not just to the values 
yaml file This means that the following yaml file is valid given that the chart is installed with the appropriate set option shown below yaml name frontend protocol https console helm install set port 443 Furthermore the final Values object is checked against all subchart schemas This means that restrictions on a subchart can t be circumvented by a parent chart This also works backwards if a subchart has a requirement that is not met in the subchart s values yaml file the parent chart must satisfy those restrictions in order to be valid References When it comes to writing templates values and schema files there are several standard references that will help you out Go templates https godoc org text template Extra template functions https godoc org github com Masterminds sprig The YAML format https yaml org spec JSON Schema https json schema org Custom Resource Definitions CRDs Kubernetes provides a mechanism for declaring new types of Kubernetes objects Using CustomResourceDefinitions CRDs Kubernetes developers can declare custom resource types In Helm 3 CRDs are treated as a special kind of object They are installed before the rest of the chart and are subject to some limitations CRD YAML files should be placed in the crds directory inside of a chart Multiple CRDs separated by YAML start and end markers may be placed in the same file Helm will attempt to load all of the files in the CRD directory into Kubernetes CRD files cannot be templated They must be plain YAML documents When Helm installs a new chart it will upload the CRDs pause until the CRDs are made available by the API server and then start the template engine render the rest of the chart and upload it to Kubernetes Because of this ordering CRD information is available in the Capabilities object in Helm templates and Helm templates may create new instances of objects that were declared in CRDs For example if your chart had a CRD for CronTab in the crds directory you may create instances of the CronTab kind in the templates directory text crontabs Chart yaml crds crontab yaml templates mycrontab yaml The crontab yaml file must contain the CRD with no template directives yaml kind CustomResourceDefinition metadata name crontabs stable example com spec group stable example com versions name v1 served true storage true scope Namespaced names plural crontabs singular crontab kind CronTab Then the template mycrontab yaml may create a new CronTab using templates as usual yaml apiVersion stable example com kind CronTab metadata name spec Helm will make sure that the CronTab kind has been installed and is available from the Kubernetes API server before it proceeds installing the things in templates Limitations on CRDs Unlike most objects in Kubernetes CRDs are installed globally For that reason Helm takes a very cautious approach in managing CRDs CRDs are subject to the following limitations CRDs are never reinstalled If Helm determines that the CRDs in the crds directory are already present regardless of version Helm will not attempt to install or upgrade CRDs are never installed on upgrade or rollback Helm will only create CRDs on installation operations CRDs are never deleted Deleting a CRD automatically deletes all of the CRD s contents across all namespaces in the cluster Consequently Helm will not delete CRDs Operators who want to upgrade or delete CRDs are encouraged to do this manually and with great care Using Helm to Manage Charts The helm tool has several commands for working with charts It can create a new chart for you 
console helm create mychart Created mychart Once you have edited a chart helm can package it into a chart archive for you console helm package mychart Archived mychart 0 1 tgz You can also use helm to help you find issues with your chart s formatting or information console helm lint mychart No issues found Chart Repositories A chart repository is an HTTP server that houses one or more packaged charts While helm can be used to manage local chart directories when it comes to sharing charts the preferred mechanism is a chart repository Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server The Helm team has tested some servers including Google Cloud Storage with website mode enabled and S3 with website mode enabled A repository is characterized primarily by the presence of a special file called index yaml that has a list of all of the packages supplied by the repository together with metadata that allows retrieving and verifying those packages On the client side repositories are managed with the helm repo commands However Helm does not provide tools for uploading charts to remote repository servers This is because doing so would add substantial requirements to an implementing server and thus raise the barrier for setting up a repository Chart Starter Packs The helm create command takes an optional starter option that lets you specify a starter chart Also the starter option has a short alias p Examples of usage console helm create my chart starter starter name helm create my chart p starter name helm create my chart p absolute path to starter name Starters are just regular charts but are located in XDG DATA HOME helm starters As a chart developer you may author charts that are specifically designed to be used as starters Such charts should be designed with the following considerations in mind The Chart yaml will be overwritten by the generator Users will expect to modify such a chart s contents so documentation should indicate how users can do so All occurrences of CHARTNAME will be replaced with the specified chart name so that starter charts can be used as templates except for some variable files For example if you use custom files in the vars directory or certain README md files CHARTNAME will NOT override inside them Additionally the chart description is not inherited Currently the only way to add a chart to XDG DATA HOME helm starters is to manually copy it there In your chart s documentation you may want to explain that process |
---
title: "Advanced Helm Techniques"
description: "Explains various advanced features for Helm power users"
aliases: ["/docs/advanced_helm_techniques"]
weight: 9
---
This section explains various advanced features and techniques for using Helm.
The information in this section is intended for "power users" of Helm who wish
to do advanced customization and manipulation of their charts and releases. Each
of these advanced features comes with its own tradeoffs and caveats, so each
one must be used carefully and with deep knowledge of Helm. Or in other words,
remember the [Peter Parker
principle](https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility).
## Post Rendering
Post rendering gives chart installers the ability to manually manipulate,
configure, and/or validate rendered manifests before they are installed by Helm.
This allows users with advanced configuration needs to use tools like
[`kustomize`](https://kustomize.io) to apply configuration changes without
needing to fork a public chart or require chart maintainers to specify every
last configuration option for a piece of software. There are also use cases for
injecting common tools and sidecars in enterprise environments, or for analyzing
the manifests before deployment.
### Prerequisites
- Helm 3.1+
### Usage
A post-renderer can be any executable that accepts rendered Kubernetes manifests
on STDIN and returns valid Kubernetes manifests on STDOUT. It should return a
non-zero exit code in the event of a failure. This is the only "API" between the
two components. It allows for great flexibility in what you can do with your
post-render process.
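As an illustration, a minimal post-renderer could be a small shell script that
wraps `kustomize` (a sketch only; it assumes a `kustomization.yaml` in the
working directory that lists `all.yaml` as a resource, and both file names are
placeholders rather than anything Helm requires):

```shell
#!/usr/bin/env sh
# Hypothetical post-renderer: Helm pipes the rendered manifests to STDIN,
# and whatever this script prints to STDOUT is what gets installed.
set -e
cat > all.yaml          # capture Helm's rendered output for kustomize to consume
kustomize build .       # emit the patched manifests on STDOUT
rm all.yaml
```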
A post renderer can be used with `install`, `upgrade`, and `template`. To use a
post-renderer, use the `--post-renderer` flag with a path to the renderer
executable you wish to use:
```shell
$ helm install mychart stable/wordpress --post-renderer ./path/to/executable
```
If the path does not contain any separators, the executable will be looked up
in `$PATH`; otherwise, any relative path is resolved to a fully qualified path.
If you wish to use multiple post-renderers, call all of them in a script or
together in whatever binary tool you have built. In bash, this would be as
simple as `renderer1 | renderer2 | renderer3`.
You can see an example of using `kustomize` as a post renderer
[here](https://github.com/thomastaylor312/advanced-helm-demos/tree/master/post-render).
### Caveats
When using post renderers, there are several important things to keep in mind.
The most important of these is that when using a post-renderer, all people
modifying that release **MUST** use the same renderer in order to have
repeatable builds. This feature is purposefully built to allow any user to
switch out which renderer they are using or to stop using a renderer, but this
should be done deliberately to avoid accidental modification or data loss.
One other important note is around security. If you are using a post-renderer,
you should ensure it is coming from a reliable source (as is the case for any
other arbitrary executable). Using non-trusted or non-verified renderers is NOT
recommended as they have full access to rendered templates, which often contain
secret data.
### Custom Post Renderers
The post render step offers even more flexibility when used in the Go SDK. Any
post renderer only needs to implement the following Go interface:
```go
type PostRenderer interface {
// Run expects a single buffer filled with Helm rendered manifests. It
// expects the modified results to be returned on a separate buffer or an
// error if there was an issue or failure while running the post render step
Run(renderedManifests *bytes.Buffer) (modifiedManifests *bytes.Buffer, err error)
}
```
For more information on using the Go SDK, see the [Go SDK section](#go-sdk).
## Go SDK
Helm 3 debuted a completely restructured Go SDK for a better experience when
building software and tools that leverage Helm. Full documentation can be found
in the [Go SDK Section](../sdk/gosdk.md).
## Storage backends
Helm 3 changed the default release information storage to Secrets in the
namespace of the release. Helm 2 by default stores release information as
ConfigMaps in the namespace of the Tiller instance. The subsections which follow
show how to configure different backends. This configuration is based on the
`HELM_DRIVER` environment variable. It can be set to one of the values:
`[configmap, secret, sql]`.
### ConfigMap storage backend
To enable the ConfigMap backend, you'll need to set the environment variable
`HELM_DRIVER` to `configmap`.
You can set it in a shell as follows:
```shell
export HELM_DRIVER=configmap
```
If you want to switch from the default backend to the ConfigMap backend, you'll
have to do the migration for this on your own. You can retrieve release
information with the following command:
```shell
kubectl get secret --all-namespaces -l "owner=helm"
```
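For example, after switching drivers you can check where Helm stores new
release records (a sketch; the chart, release, and namespace names below are
placeholders):

```shell
# Install a release with the ConfigMap driver, then inspect where it was stored.
HELM_DRIVER=configmap helm install myapp ./mychart --namespace demo
kubectl get configmap --namespace demo -l "owner=helm"
```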
**PRODUCTION NOTES**: The release information includes the contents of charts and
values files, and therefore might contain sensitive data (like
passwords, private keys, and other credentials) that needs to be protected from
unauthorized access. When managing Kubernetes authorization, for instance with
[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), it is
possible to grant broader access to ConfigMap resources, while restricting
access to Secret resources. For instance, the default [user-facing
role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles)
"view" grants access to most resources, but not to Secrets. Furthermore, secrets
data can be configured for [encrypted
storage](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).
Please keep that in mind if you decide to switch to the ConfigMap backend, as it
could expose your application's sensitive data.
### SQL storage backend
There is a ***beta*** SQL storage backend that stores release information in an SQL
database.
Using such a storage backend is particularly useful if your release information
weighs more than 1MB (in which case, it can't be stored in ConfigMaps/Secrets
because of internal limits in Kubernetes' underlying etcd key-value store).
To enable the SQL backend, you'll need to deploy a SQL database and set the
environment variable `HELM_DRIVER` to `sql`. The DB details are set with the
environment variable `HELM_DRIVER_SQL_CONNECTION_STRING`.
You can set it in a shell as follows:
```shell
export HELM_DRIVER=sql
export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://helm-postgres:5432/helm?user=helm&password=changeme"
```
> Note: Only PostgreSQL is supported at this moment.
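If you just want to try the backend out locally, one possible setup is a
throwaway PostgreSQL container (a sketch, not a production configuration; the
container name, credentials, and image tag are arbitrary choices for the
example):

```shell
# Start a disposable PostgreSQL instance for experimenting with the SQL backend.
docker run --name helm-postgres --detach --publish 5432:5432 \
  --env POSTGRES_USER=helm --env POSTGRES_PASSWORD=changeme --env POSTGRES_DB=helm \
  postgres:12
export HELM_DRIVER=sql
export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://localhost:5432/helm?user=helm&password=changeme&sslmode=disable"
```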
**PRODUCTION NOTES**: It is recommended to:
- Make your database production ready. For PostgreSQL, refer to the [Server Administration](https://www.postgresql.org/docs/12/admin.html) docs for more details
- Enable [permission management](/docs/permissions_sql_storage_backend/) to
mirror Kubernetes RBAC for release information
If you want to switch from the default backend to the SQL backend, you'll have
to do the migration for this on your own. You can retrieve release information
with the following command:
```shell
kubectl get secret --all-namespaces -l "owner=helm"
``` | helm | title Advanced Helm Techniques description Explains various advanced features for Helm power users aliases docs advanced helm techniques weight 9 This section explains various advanced features and techniques for using Helm The information in this section is intended for power users of Helm that wish to do advanced customization and manipulation of their charts and releases Each of these advanced features comes with their own tradeoffs and caveats so each one must be used carefully and with deep knowledge of Helm Or in other words remember the Peter Parker principle https en wikipedia org wiki With great power comes great responsibility Post Rendering Post rendering gives chart installers the ability to manually manipulate configure and or validate rendered manifests before they are installed by Helm This allows users with advanced configuration needs to be able to use tools like kustomize https kustomize io to apply configuration changes without the need to fork a public chart or requiring chart maintainers to specify every last configuration option for a piece of software There are also use cases for injecting common tools and side cars in enterprise environments or analysis of the manifests before deployment Prerequisites Helm 3 1 Usage A post renderer can be any executable that accepts rendered Kubernetes manifests on STDIN and returns valid Kubernetes manifests on STDOUT It should return an non 0 exit code in the event of a failure This is the only API between the two components It allows for great flexibility in what you can do with your post render process A post renderer can be used with install upgrade and template To use a post renderer use the post renderer flag with a path to the renderer executable you wish to use shell helm install mychart stable wordpress post renderer path to executable If the path does not contain any separators it will search in PATH otherwise it will resolve any relative paths to a fully qualified path If you wish to use multiple post renderers call all of them in a script or together in whatever binary tool you have built In bash this would be as simple as renderer1 renderer2 renderer3 You can see an example of using kustomize as a post renderer here https github com thomastaylor312 advanced helm demos tree master post render Caveats When using post renderers there are several important things to keep in mind The most important of these is that when using a post renderer all people modifying that release MUST use the same renderer in order to have repeatable builds This feature is purposefully built to allow any user to switch out which renderer they are using or to stop using a renderer but this should be done deliberately to avoid accidental modification or data loss One other important note is around security If you are using a post renderer you should ensure it is coming from a reliable source as is the case for any other arbitrary executable Using non trusted or non verified renderers is NOT recommended as they have full access to rendered templates which often contain secret data Custom Post Renderers The post render step offers even more flexibility when used in the Go SDK Any post renderer only needs to implement the following Go interface go type PostRenderer interface Run expects a single buffer filled with Helm rendered manifests It expects the modified results to be returned on a separate buffer or an error if there was an issue or failure while running the post render step Run renderedManifests bytes Buffer modifiedManifests 
bytes Buffer err error For more information on using the Go SDK See the Go SDK section go sdk Go SDK Helm 3 debuted a completely restructured Go SDK for a better experience when building software and tools that leverage Helm Full documentation can be found in the Go SDK Section sdk gosdk md Storage backends Helm 3 changed the default release information storage to Secrets in the namespace of the release Helm 2 by default stores release information as ConfigMaps in the namespace of the Tiller instance The subsections which follow show how to configure different backends This configuration is based on the HELM DRIVER environment variable It can be set to one of the values configmap secret sql ConfigMap storage backend To enable the ConfigMap backend you ll need to set the environmental variable HELM DRIVER to configmap You can set it in a shell as follows shell export HELM DRIVER configmap If you want to switch from the default backend to the ConfigMap backend you ll have to do the migration for this on your own You can retrieve release information with the following command shell kubectl get secret all namespaces l owner helm PRODUCTION NOTES The release information includes the contents of charts and values files and therefore might contain sensitive data like passwords private keys and other credentials that needs to be protected from unauthorized access When managing Kubernetes authorization for instance with RBAC https kubernetes io docs reference access authn authz rbac it is possible to grant broader access to ConfigMap resources while restricting access to Secret resources For instance the default user facing role https kubernetes io docs reference access authn authz rbac user facing roles view grants access to most resources but not to Secrets Furthermore secrets data can be configured for encrypted storage https kubernetes io docs tasks administer cluster encrypt data Please keep that in mind if you decide to switch to the ConfigMap backend as it could expose your application s sensitive data SQL storage backend There is a beta SQL storage backend that stores release information in an SQL database Using such a storage backend is particularly useful if your release information weighs more than 1MB in which case it can t be stored in ConfigMaps Secrets because of internal limits in Kubernetes underlying etcd key value store To enable the SQL backend you ll need to deploy a SQL database and set the environmental variable HELM DRIVER to sql The DB details are set with the environmental variable HELM DRIVER SQL CONNECTION STRING You can set it in a shell as follows shell export HELM DRIVER sql export HELM DRIVER SQL CONNECTION STRING postgresql helm postgres 5432 helm user helm password changeme Note Only PostgreSQL is supported at this moment PRODUCTION NOTES It is recommended to Make your database production ready For PostgreSQL refer to the Server Administration https www postgresql org docs 12 admin html docs for more details Enable permission management docs permissions sql storage backend to mirror Kubernetes RBAC for release information If you want to switch from the default backend to the SQL backend you ll have to do the migration for this on your own You can retrieve release information with the following command shell kubectl get secret all namespaces l owner helm |
---
title: "The Helm Plugins Guide"
description: "Introduces how to use and create plugins to extend Helm's functionality."
aliases: ["/docs/plugins/"]
weight: 12
---
A Helm plugin is a tool that can be accessed through the `helm` CLI, but which
is not part of the built-in Helm codebase.
Existing plugins can be found in the related projects section of the Helm documentation or by searching
[GitHub](https://github.com/search?q=topic%3Ahelm-plugin&type=Repositories).
This guide explains how to use and create plugins.
## An Overview
Helm plugins are add-on tools that integrate seamlessly with Helm. They provide
a way to extend the core feature set of Helm, but without requiring every new
feature to be written in Go and added to the core tool.
Helm plugins have the following features:
- They can be added and removed from a Helm installation without impacting the
core Helm tool.
- They can be written in any programming language.
- They integrate with Helm, and will show up in `helm help` and other places.
Helm plugins live in `$HELM_PLUGINS`. You can find the current value of this,
including the default value when not set in the environment, using the
`helm env` command.
The Helm plugin model is partially modeled on Git's plugin model. To that end,
you may sometimes hear `helm` referred to as the _porcelain_ layer, with plugins
being the _plumbing_. This is a shorthand way of suggesting that Helm provides
the user experience and top level processing logic, while the plugins do the
"detail work" of performing a desired action.
## Installing a Plugin
Plugins are installed using the `$ helm plugin install <path|url>` command. You
can pass in a path to a plugin on your local file system or a url of a remote
VCS repo. The `helm plugin install` command clones or copies the plugin at the
path/url given into `$HELM_PLUGINS`.
```console
$ helm plugin install https://github.com/adamreese/helm-env
```
If you have a plugin tar distribution, simply untar the plugin into the
`$HELM_PLUGINS` directory. You can also install tarball plugins
directly from a URL by issuing `helm plugin install
https://domain/path/to/plugin.tar.gz`.
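Either way, you can confirm that Helm sees the plugin, for example:

```console
$ helm plugin list
```

The installed plugin should appear in the list along with its version and
description.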
## Testing a locally built Plugin
First, you need to find your `HELM_PLUGINS` path. To do so, run the following command:
``` bash
helm env
```
Change your current directory to the directory that `HELM_PLUGINS` is set to.
Now you can add a symbolic link to the build output of your plugin; in this example we do it for `mapkubeapis`.
``` bash
ln -s ~/GitHub/helm-mapkubeapis ./helm-mapkubeapis
```
## Building Plugins
In many ways, a plugin is similar to a chart. Each plugin has a top-level
directory, and then a `plugin.yaml` file.
```
$HELM_PLUGINS/
|- last/
|
|- plugin.yaml
|- last.sh
```
In the example above, the `last` plugin is contained inside of a directory
named `last`. It has two files: `plugin.yaml` (required) and an executable
script, `last.sh` (optional).
The core of a plugin is a simple YAML file named `plugin.yaml`. Here is a plugin
YAML for a plugin that helps get the last release name:
```yaml
name: "last"
version: "0.1.0"
usage: "get the last release name"
description: "get the last release name"
ignoreFlags: false
command: "$HELM_BIN --host $TILLER_HOST list --short --max 1 --date -r"
platformCommand:
- os: linux
arch: i386
command: "$HELM_BIN list --short --max 1 --date -r"
- os: linux
arch: amd64
command: "$HELM_BIN list --short --max 1 --date -r"
- os: windows
arch: amd64
command: "$HELM_BIN list --short --max 1 --date -r"
```
The `name` is the name of the plugin. When Helm executes this plugin, this is
the name it will use (e.g. `helm NAME` will invoke this plugin).
_`name` should match the directory name._ In our example above, that means the
plugin with `name: last` should be contained in a directory named `last`.
Restrictions on `name`:
- `name` cannot duplicate one of the existing `helm` top-level commands.
- `name` must be restricted to the characters ASCII a-z, A-Z, 0-9, `_` and `-`.
`version` is the SemVer 2 version of the plugin. `usage` and `description` are
both used to generate the help text of a command.
The `ignoreFlags` switch tells Helm to _not_ pass flags to the plugin. So if a
plugin is called with `helm myplugin --foo` and `ignoreFlags: true`, then
`--foo` is silently discarded.
Finally, and most importantly, `platformCommand` or `command` is the command
that this plugin will execute when it is called. The `platformCommand` section
defines the OS/Architecture specific variations of a command. The following
rules will apply in deciding which command to use:
- If `platformCommand` is present, it will be searched first.
- If both `os` and `arch` match the current platform, search will stop and the
command will be used.
- If `os` matches and there is no more specific `arch` match, the command will
be used.
- If no `platformCommand` match is found, the default `command` will be used.
- If no matches are found in `platformCommand` and no `command` is present, Helm
will exit with an error.
Environment variables are interpolated before the plugin is executed. The
pattern above illustrates the preferred way to indicate where the plugin program
lives.
There are some strategies for working with plugin commands:
- If a plugin includes an executable, the executable for a `platformCommand:` or
a `command:` should be packaged in the plugin directory.
- The `platformCommand:` or `command:` line will have any environment variables
  expanded before execution. `$HELM_PLUGIN_DIR` will point to the plugin
  directory (see the sketch after this list).
- The command itself is not executed in a shell. So you can't oneline a shell
script.
- Helm injects lots of configuration into environment variables. Take a look at
the environment to see what information is available.
- Helm makes no assumptions about the language of the plugin. You can write it
in whatever you prefer.
- Commands are responsible for implementing specific help text for `-h` and
`--help`. Helm will use `usage` and `description` for `helm help` and `helm
help myplugin`, but will not handle `helm myplugin --help`.
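For instance, a plugin that ships its own executable might wire it up through
`$HELM_PLUGIN_DIR` (a hypothetical sketch; `myplugin.sh` is a placeholder name,
and the `plugin.yaml` line shown in the comment is the only part Helm itself
interprets):

```shell
#!/usr/bin/env sh
# Hypothetical $HELM_PLUGIN_DIR/myplugin.sh, referenced from plugin.yaml as:
#   command: "$HELM_PLUGIN_DIR/myplugin.sh"
# Helm expands environment variables in that command line before executing it.
set -e
echo "Operating in namespace: ${HELM_NAMESPACE}"
"${HELM_BIN}" list --namespace "${HELM_NAMESPACE}" --short
```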
## Downloader Plugins
By default, Helm is able to pull Charts using HTTP/S. As of Helm 2.4.0, plugins
can have a special capability to download Charts from arbitrary sources.
Plugins shall declare this special capability in the `plugin.yaml` file (top
level):
```yaml
downloaders:
- command: "bin/mydownloader"
protocols:
- "myprotocol"
- "myprotocols"
```
If such a plugin is installed, Helm can interact with the repository using the
specified protocol scheme by invoking the `command`. The special repository
shall be added similarly to the regular ones: `helm repo add favorite
myprotocol://example.com/`. The rules for the special repos are the same as for
the regular ones: Helm must be able to download the `index.yaml` file in order to
discover and cache the list of available Charts.
The defined command will be invoked with the following scheme: `command certFile
keyFile caFile full-URL`. The SSL credentials come from the repo
definition, stored in `$HELM_REPOSITORY_CONFIG`
(i.e., `$HELM_CONFIG_HOME/repositories.yaml`). A Downloader plugin
is expected to dump the raw content to stdout and report errors on stderr.
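As an illustration, a downloader for a hypothetical `myprotocol://` scheme could
look like the sketch below (the script name, the scheme, and the use of `curl`
are all assumptions made for the example; a real plugin substitutes its own
fetch logic):

```shell
#!/usr/bin/env sh
# Hypothetical bin/mydownloader, invoked by Helm as:
#   mydownloader <certFile> <keyFile> <caFile> <full-URL>
# It must write the raw file contents to stdout and report errors on stderr.
set -e
cert="$1"; key="$2"; ca="$3"; url="$4"
# Map the custom scheme back to HTTPS; this example assumes the repository was
# added with client certificate files configured in repositories.yaml.
https_url=$(printf '%s' "$url" | sed 's|^myprotocol://|https://|')
curl --silent --show-error --fail \
  --cert "$cert" --key "$key" --cacert "$ca" \
  "$https_url"
```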
The downloader command also supports sub-commands or arguments, allowing you to
specify for example `bin/mydownloader subcommand -d` in the `plugin.yaml`. This
is useful if you want to use the same executable for the main plugin command and
the downloader command, but with a different sub-command for each.
## Environment Variables
When Helm executes a plugin, it passes the outer environment to the plugin, and
also injects some additional environment variables.
Variables like `KUBECONFIG` are set for the plugin if they are set in the outer
environment.
The following variables are guaranteed to be set:
- `HELM_PLUGINS`: The path to the plugins directory.
- `HELM_PLUGIN_NAME`: The name of the plugin, as invoked by `helm`. So `helm
myplug` will have the short name `myplug`.
- `HELM_PLUGIN_DIR`: The directory that contains the plugin.
- `HELM_BIN`: The path to the `helm` command (as executed by the user).
- `HELM_DEBUG`: Indicates if the debug flag was set by helm.
- `HELM_REGISTRY_CONFIG`: The location for the registry configuration (if
using). Note that the use of Helm with registries is an experimental feature.
- `HELM_REPOSITORY_CACHE`: The path to the repository cache files.
- `HELM_REPOSITORY_CONFIG`: The path to the repository configuration file.
- `HELM_NAMESPACE`: The namespace given to the `helm` command (generally using
the `-n` flag).
- `HELM_KUBECONTEXT`: The name of the Kubernetes config context given to the
`helm` command.
Additionally, if a Kubernetes configuration file was explicitly specified, it
will be set as the `KUBECONFIG` variable.
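A quick way to see everything Helm injects is to dump the environment from
inside a plugin script (a trivial sketch):

```shell
# Print all Helm-provided variables visible to the plugin process.
env | grep '^HELM_' | sort
```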
## A Note on Flag Parsing
When executing a plugin, Helm will parse global flags for its own use. None of
these flags are passed on to the plugin.
- `--debug`: If this is specified, `$HELM_DEBUG` is set to `1`
- `--registry-config`: This is converted to `$HELM_REGISTRY_CONFIG`
- `--repository-cache`: This is converted to `$HELM_REPOSITORY_CACHE`
- `--repository-config`: This is converted to `$HELM_REPOSITORY_CONFIG`
- `--namespace` and `-n`: This is converted to `$HELM_NAMESPACE`
- `--kube-context`: This is converted to `$HELM_KUBECONTEXT`
- `--kubeconfig`: This is converted to `$KUBECONFIG`
Plugins _should_ display help text and then exit for `-h` and `--help`. In all
other cases, plugins may use flags as appropriate.
## Providing shell auto-completion
As of Helm 3.2, a plugin can optionally provide support for shell
auto-completion as part of Helm's existing auto-completion mechanism.
### Static auto-completion
If a plugin provides its own flags and/or sub-commands, it can inform Helm of
them by having a `completion.yaml` file located in the plugin's root directory.
The `completion.yaml` file has the form:
```yaml
name: <pluginName>
flags:
- <flag 1>
- <flag 2>
validArgs:
- <arg value 1>
- <arg value 2>
commands:
name: <commandName>
flags:
- <flag 1>
- <flag 2>
validArgs:
- <arg value 1>
- <arg value 2>
commands:
<and so on, recursively>
```
Notes:
1. All sections are optional but should be provided if applicable.
1. Flags should not include the `-` or `--` prefix.
1. Both short and long flags can and should be specified. A short flag need not
be associated with its corresponding long form, but both forms should be
listed.
1. Flags need not be ordered in any way, but need to be listed at the correct
point in the sub-command hierarchy of the file.
1. Helm's existing global flags are already handled by Helm's auto-completion
mechanism, therefore plugins need not specify the following flags `--debug`,
`--namespace` or `-n`, `--kube-context`, and `--kubeconfig`, or any other
global flag.
1. The `validArgs` list provides a static list of possible completions for the
first parameter following a sub-command. It is not always possible to
provide such a list in advance (see the [Dynamic
Completion](#dynamic-completion) section below), in which case the
`validArgs` section can be omitted.
The `completion.yaml` file is entirely optional. If it is not provided, Helm
will simply not provide shell auto-completion for the plugin (unless [Dynamic
Completion](#dynamic-completion) is supported by the plugin). Also, adding a
`completion.yaml` file is backwards-compatible and will not impact the behavior
of the plugin when using older helm versions.
As an example, for the [`fullstatus
plugin`](https://github.com/marckhouzam/helm-fullstatus) which has no
sub-commands but accepts the same flags as the `helm status` command, the
`completion.yaml` file is:
```yaml
name: fullstatus
flags:
- o
- output
- revision
```
A more intricate example for the [`2to3
plugin`](https://github.com/helm/helm-2to3), has a `completion.yaml` file of:
```yaml
name: 2to3
commands:
- name: cleanup
flags:
- config-cleanup
- dry-run
- l
- label
- release-cleanup
- s
- release-storage
- tiller-cleanup
- t
- tiller-ns
- tiller-out-cluster
- name: convert
flags:
- delete-v2-releases
- dry-run
- l
- label
- s
- release-storage
- release-versions-max
- t
- tiller-ns
- tiller-out-cluster
- name: move
commands:
- name: config
flags:
- dry-run
```
### Dynamic completion
Also starting with Helm 3.2, plugins can provide their own dynamic shell
auto-completion. Dynamic shell auto-completion is the completion of parameter
values or flag values that cannot be defined in advance. For example,
completion of the names of helm releases currently available on the cluster.
For the plugin to support dynamic auto-completion, it must provide an
**executable** file called `plugin.complete` in its root directory. When the
Helm completion script requires dynamic completions for the plugin, it will
execute the `plugin.complete` file, passing it the command-line that needs to be
completed. The `plugin.complete` executable will need to have the logic to
determine what the proper completion choices are and output them to standard
output to be consumed by the Helm completion script.
The `plugin.complete` file is entirely optional. If it is not provided, Helm
will simply not provide dynamic auto-completion for the plugin. Also, adding a
`plugin.complete` file is backwards-compatible and will not impact the behavior
of the plugin when using older helm versions.
The output of the `plugin.complete` script should be a new-line separated list
such as:
```
rel1
rel2
rel3
```
When `plugin.complete` is called, the plugin environment is set just like when
the plugin's main script is called. Therefore, the variables `$HELM_NAMESPACE`,
`$HELM_KUBECONTEXT`, and all other plugin variables will already be set, and
their corresponding global flags will be removed.
The `plugin.complete` file can be in any executable form; it can be a shell
script, a Go program, or any other type of program that Helm can execute. The
`plugin.complete` file ***must*** have executable permissions for the user. The
`plugin.complete` file ***must*** exit with a success code (value 0).
In some cases, dynamic completion will require obtaining information from the
Kubernetes cluster. For example, the `helm fullstatus` plugin requires a
release name as input. In the `fullstatus` plugin, for its `plugin.complete`
script to provide completion for current release names, it can simply run `helm
list -q` and output the result.
If it is desired to use the same executable for plugin execution and for plugin
completion, the `plugin.complete` script can be made to call the main plugin
executable with some special parameter or flag; when the main plugin executable
detects the special parameter or flag, it will know to run the completion. In
our example, `plugin.complete` could be implemented like this:
```sh
#!/usr/bin/env sh
# "$@" is the entire command-line that requires completion.
# It is important to double-quote the "$@" variable to preserve a possibly empty last parameter.
$HELM_PLUGIN_DIR/status.sh --complete "$@"
```
The `fullstatus` plugin's real script (`status.sh`) must then look for the
`--complete` flag and, if found, print out the proper completions.
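A hypothetical sketch of how `status.sh` might handle that flag (not the
plugin's actual code):

```shell
# Near the top of status.sh: answer completion requests and exit early.
if [ "$1" = "--complete" ]; then
  shift
  # Print all release names in the current namespace; the shell filters them.
  helm list -q --namespace "${HELM_NAMESPACE}"
  exit 0
fi
# ...normal plugin logic continues below...
```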
### Tips and tricks
1. The shell will automatically filter out completion choices that don't match
user input. A plugin can therefore return all relevant completions without
removing the ones that don't match the user input. For example, if the
command-line is `helm fullstatus ngin<TAB>`, the `plugin.complete` script can
print *all* release names (of the `default` namespace), not just the ones
starting with `ngin`; the shell will only retain the ones starting with
`ngin`.
1. To simplify dynamic completion support, especially if you have a complex
plugin, you can have your `plugin.complete` script call your main plugin
script and request completion choices. See the [Dynamic
Completion](#dynamic-completion) section above for an example.
1. To debug dynamic completion and the `plugin.complete` file, one can run the
   following to see the completion results:
- `helm __complete <pluginName> <arguments to complete>`. For example:
- `helm __complete fullstatus --output js<ENTER>`,
- `helm __complete fullstatus -o json ""<ENTER>` | helm | title The Helm Plugins Guide description Introduces how to use and create plugins to extend Helm s functionality aliases docs plugins weight 12 A Helm plugin is a tool that can be accessed through the helm CLI but which is not part of the built in Helm codebase Existing plugins can be found on related section or by searching GitHub https github com search q topic 3Ahelm plugin type Repositories This guide explains how to use and create plugins An Overview Helm plugins are add on tools that integrate seamlessly with Helm They provide a way to extend the core feature set of Helm but without requiring every new feature to be written in Go and added to the core tool Helm plugins have the following features They can be added and removed from a Helm installation without impacting the core Helm tool They can be written in any programming language They integrate with Helm and will show up in helm help and other places Helm plugins live in HELM PLUGINS You can find the current value of this including the default value when not set in the environment using the helm env command The Helm plugin model is partially modeled on Git s plugin model To that end you may sometimes hear helm referred to as the porcelain layer with plugins being the plumbing This is a shorthand way of suggesting that Helm provides the user experience and top level processing logic while the plugins do the detail work of performing a desired action Installing a Plugin Plugins are installed using the helm plugin install path url command You can pass in a path to a plugin on your local file system or a url of a remote VCS repo The helm plugin install command clones or copies the plugin at the path url given into HELM PLUGINS console helm plugin install https github com adamreese helm env If you have a plugin tar distribution simply untar the plugin into the HELM PLUGINS directory You can also install tarball plugins directly from url by issuing helm plugin install https domain path to plugin tar gz Testing a locally built Plugin First you need to find your HELM PLUGINS path to do it run the folowing command bash helm env Change your current directory to the director that HELM PLUGINS is set to Now you can add a symbolic link to your build out put of your plugin in this example we did it for mapkubeapis bash ln s GitHub helm mapkubeapis helm mapkubeapis Building Plugins In many ways a plugin is similar to a chart Each plugin has a top level directory and then a plugin yaml file HELM PLUGINS last plugin yaml last sh In the example above the last plugin is contained inside of a directory named last It has two files plugin yaml required and an executable script last sh optional The core of a plugin is a simple YAML file named plugin yaml Here is a plugin YAML for a plugin that helps get the last release name yaml name last version 0 1 0 usage get the last release name description get the last release name ignoreFlags false command HELM BIN host TILLER HOST list short max 1 date r platformCommand os linux arch i386 command HELM BIN list short max 1 date r os linux arch amd64 command HELM BIN list short max 1 date r os windows arch amd64 command HELM BIN list short max 1 date r The name is the name of the plugin When Helm executes this plugin this is the name it will use e g helm NAME will invoke this plugin name should match the directory name In our example above that means the plugin with name last should be contained in a directory named last Restrictions on name name cannot 
duplicate one of the existing helm top level commands name must be restricted to the characters ASCII a z A Z 0 9 and version is the SemVer 2 version of the plugin usage and description are both used to generate the help text of a command The ignoreFlags switch tells Helm to not pass flags to the plugin So if a plugin is called with helm myplugin foo and ignoreFlags true then foo is silently discarded Finally and most importantly platformCommand or command is the command that this plugin will execute when it is called The platformCommand section defines the OS Architecture specific variations of a command The following rules will apply in deciding which command to use If platformCommand is present it will be searched first If both os and arch match the current platform search will stop and the command will be used If os matches and there is no more specific arch match the command will be used If no platformCommand match is found the default command will be used If no matches are found in platformCommand and no command is present Helm will exit with an error Environment variables are interpolated before the plugin is executed The pattern above illustrates the preferred way to indicate where the plugin program lives There are some strategies for working with plugin commands If a plugin includes an executable the executable for a platformCommand or a command should be packaged in the plugin directory The platformCommand or command line will have any environment variables expanded before execution HELM PLUGIN DIR will point to the plugin directory The command itself is not executed in a shell So you can t oneline a shell script Helm injects lots of configuration into environment variables Take a look at the environment to see what information is available Helm makes no assumptions about the language of the plugin You can write it in whatever you prefer Commands are responsible for implementing specific help text for h and help Helm will use usage and description for helm help and helm help myplugin but will not handle helm myplugin help Downloader Plugins By default Helm is able to pull Charts using HTTP S As of Helm 2 4 0 plugins can have a special capability to download Charts from arbitrary sources Plugins shall declare this special capability in the plugin yaml file top level yaml downloaders command bin mydownloader protocols myprotocol myprotocols If such plugin is installed Helm can interact with the repository using the specified protocol scheme by invoking the command The special repository shall be added similarly to the regular ones helm repo add favorite myprotocol example com The rules for the special repos are the same to the regular ones Helm must be able to download the index yaml file in order to discover and cache the list of available Charts The defined command will be invoked with the following scheme command certFile keyFile caFile full URL The SSL credentials are coming from the repo definition stored in HELM REPOSITORY CONFIG i e HELM CONFIG HOME repositories yaml A Downloader plugin is expected to dump the raw content to stdout and report errors on stderr The downloader command also supports sub commands or arguments allowing you to specify for example bin mydownloader subcommand d in the plugin yaml This is useful if you want to use the same executable for the main plugin command and the downloader command but with a different sub command for each Environment Variables When Helm executes a plugin it passes the outer environment to the plugin and also injects some 
additional environment variables Variables like KUBECONFIG are set for the plugin if they are set in the outer environment The following variables are guaranteed to be set HELM PLUGINS The path to the plugins directory HELM PLUGIN NAME The name of the plugin as invoked by helm So helm myplug will have the short name myplug HELM PLUGIN DIR The directory that contains the plugin HELM BIN The path to the helm command as executed by the user HELM DEBUG Indicates if the debug flag was set by helm HELM REGISTRY CONFIG The location for the registry configuration if using Note that the use of Helm with registries is an experimental feature HELM REPOSITORY CACHE The path to the repository cache files HELM REPOSITORY CONFIG The path to the repository configuration file HELM NAMESPACE The namespace given to the helm command generally using the n flag HELM KUBECONTEXT The name of the Kubernetes config context given to the helm command Additionally if a Kubernetes configuration file was explicitly specified it will be set as the KUBECONFIG variable A Note on Flag Parsing When executing a plugin Helm will parse global flags for its own use None of these flags are passed on to the plugin debug If this is specified HELM DEBUG is set to 1 registry config This is converted to HELM REGISTRY CONFIG repository cache This is converted to HELM REPOSITORY CACHE repository config This is converted to HELM REPOSITORY CONFIG namespace and n This is converted to HELM NAMESPACE kube context This is converted to HELM KUBECONTEXT kubeconfig This is converted to KUBECONFIG Plugins should display help text and then exit for h and help In all other cases plugins may use flags as appropriate Providing shell auto completion As of Helm 3 2 a plugin can optionally provide support for shell auto completion as part of Helm s existing auto completion mechanism Static auto completion If a plugin provides its own flags and or sub commands it can inform Helm of them by having a completion yaml file located in the plugin s root directory The completion yaml file has the form yaml name pluginName flags flag 1 flag 2 validArgs arg value 1 arg value 2 commands name commandName flags flag 1 flag 2 validArgs arg value 1 arg value 2 commands and so on recursively Notes 1 All sections are optional but should be provided if applicable 1 Flags should not include the or prefix 1 Both short and long flags can and should be specified A short flag need not be associated with its corresponding long form but both forms should be listed 1 Flags need not be ordered in any way but need to be listed at the correct point in the sub command hierarchy of the file 1 Helm s existing global flags are already handled by Helm s auto completion mechanism therefore plugins need not specify the following flags debug namespace or n kube context and kubeconfig or any other global flag 1 The validArgs list provides a static list of possible completions for the first parameter following a sub command It is not always possible to provide such a list in advance see the Dynamic Completion dynamic completion section below in which case the validArgs section can be omitted The completion yaml file is entirely optional If it is not provided Helm will simply not provide shell auto completion for the plugin unless Dynamic Completion dynamic completion is supported by the plugin Also adding a completion yaml file is backwards compatible and will not impact the behavior of the plugin when using older helm versions As an example for the fullstatus plugin https github com 
---
title: "Role-based Access Control"
description: "Explains how Helm interacts with Kubernetes' Role-Based Access Control."
aliases: ["/docs/rbac/"]
weight: 11
---
In Kubernetes, granting roles to a user or an application-specific service
account is a best practice to ensure that your application is operating in the
scope that you have specified. Read more about service account permissions [in
the official Kubernetes
docs](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions).
From Kubernetes 1.6 onwards, Role-based Access Control is enabled by default.
RBAC allows you to specify which types of actions are permitted depending on the
user and their role in your organization.
With RBAC, you can
- grant privileged operations (creating cluster-wide resources, like new roles)
to administrators
- limit a user's ability to create resources (pods, persistent volumes,
deployments) to specific namespaces, or in cluster-wide scopes (resource
quotas, roles, custom resource definitions)
- limit a user's ability to view resources either in specific namespaces or at a
cluster-wide scope.
This guide is for administrators who want to restrict the scope of a user's
interaction with the Kubernetes API.
## Managing user accounts
All Kubernetes clusters have two categories of users: service accounts managed
by Kubernetes, and normal users.
Normal users are assumed to be managed by an outside, independent service: an
administrator distributing private keys, a user store like Keystone or Google
Accounts, or even a file with a list of usernames and passwords. In this regard,
Kubernetes does not have objects which represent normal user accounts. Normal
users cannot be added to a cluster through an API call.
In contrast, service accounts are users managed by the Kubernetes API. They are
bound to specific namespaces, and created automatically by the API server or
manually through API calls. Service accounts are tied to a set of credentials
stored as Secrets, which are mounted into pods allowing in-cluster processes to
talk to the Kubernetes API.
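For example, a service account can be created and inspected with `kubectl` (the account name below is illustrative):
```console
$ kubectl create serviceaccount helm-deployer
$ kubectl get serviceaccount helm-deployer -o yaml
```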
API requests are tied to either a normal user or a service account, or are
treated as anonymous requests. This means every process inside or outside the
cluster, from a human user typing `kubectl` on a workstation, to kubelets on
nodes, to members of the control plane, must authenticate when making requests
to the API server, or be treated as an anonymous user.
## Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings
In Kubernetes, user accounts and service accounts can only view and edit
resources they have been granted access to. This access is granted through the
use of Roles and RoleBindings. Roles and RoleBindings are bound to a particular
namespace and grant users the ability to view and/or edit the resources in that
namespace that the Role gives them access to.
At a cluster scope, these are called ClusterRoles and ClusterRoleBindings.
Granting a user a ClusterRole grants them access to view and/or edit resources
across the entire cluster. It is also required to view and/or edit resources at
the cluster scope (namespaces, resource quotas, nodes).
ClusterRoles can be bound to a particular namespace through reference in a
RoleBinding. The `admin`, `edit` and `view` default ClusterRoles are commonly
used in this manner.
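For instance, binding the default `view` ClusterRole inside a single namespace can be expressed with a RoleBinding like the following (a minimal sketch; the user `sam` and namespace `foo` match the examples later in this guide, and this is the declarative equivalent of the `kubectl create rolebinding` commands used below):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sam-view
  namespace: foo
subjects:
- kind: User
  name: sam
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # Referencing a ClusterRole here scopes its permissions to the "foo" namespace only.
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```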
These are a few ClusterRoles available by default in Kubernetes. They are
intended to be user-facing roles. They include super-user roles
(`cluster-admin`), and roles with more granular access (`admin`, `edit`,
`view`).
| Default ClusterRole | Default ClusterRoleBinding | Description
|---------------------|----------------------------|-------------
| `cluster-admin` | `system:masters` group | Allows super-user access to perform any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the rolebinding's namespace, including the namespace itself.
| `admin` | None | Allows admin access, intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and rolebindings within the namespace. It does not allow write access to resource quota or to the namespace itself.
| `edit` | None | Allows read/write access to most objects in a namespace. It does not allow viewing or modifying roles or rolebindings.
| `view` | None | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or rolebindings. It does not allow viewing secrets, since those are escalating.
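To see exactly what one of these built-in roles allows on your cluster, you can inspect it with `kubectl` (illustrative commands; output omitted):
```console
$ kubectl get clusterroles
$ kubectl describe clusterrole edit
```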
## Restricting a user account's access using RBAC
Now that we understand the basics of Role-based Access Control, let's discuss
how an administrator can restrict a user's scope of access.
### Example: Grant a user read/write access to a particular namespace
To restrict a user's access to a particular namespace, we can use either the
`edit` or the `admin` role. If your charts create or interact with Roles and
Rolebindings, you'll want to use the `admin` ClusterRole.
Additionally, you may also create a RoleBinding with `cluster-admin` access.
Granting a user `cluster-admin` access at the namespace scope provides full
control over every resource in the namespace, including the namespace itself.
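If you do choose that route, the binding could look like this (a sketch; the binding name, user, and namespace are illustrative):
```console
$ kubectl create rolebinding sam-cluster-admin \
    --clusterrole cluster-admin \
    --user sam \
    --namespace foo
```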
For this example, we will create a user with the `edit` Role. First, create the
namespace:
```console
$ kubectl create namespace foo
```
Now, create a RoleBinding in that namespace, granting the user the `edit` role.
```console
$ kubectl create rolebinding sam-edit \
    --clusterrole edit \
    --user sam \
    --namespace foo
```
### Example: Grant a user read/write access at the cluster scope
If a user wishes to install a chart that installs cluster-scope resources
(namespaces, roles, custom resource definitions, etc.), they will require
cluster-scope write access.
To do that, grant the user either `admin` or `cluster-admin` access.
Granting a user `cluster-admin` access grants them access to absolutely every
resource available in Kubernetes, including node access with `kubectl drain` and
other administrative tasks. It is highly recommended to consider providing the
user `admin` access instead, or to create a custom ClusterRole tailored to their
needs.
```console
$ kubectl create clusterrolebinding sam-admin \
    --clusterrole admin \
    --user sam
```
### Example: Grant a user read-only access to a particular namespace
You might've noticed that there is no ClusterRole available for viewing secrets.
The `view` ClusterRole does not grant a user read access to Secrets due to
escalation concerns. Helm stores release metadata as Secrets by default.
In order for a user to run `helm list`, they need to be able to read these
secrets. For that, we will create a special `secret-reader` ClusterRole.
Create the file `cluster-role-secret-reader.yaml` and write the following
content into the file:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
```
Then, create the ClusterRole using
```console
$ kubectl create -f cluster-role-secret-reader.yaml
```
Once that's done, we can grant a user read access to most resources, and then
grant them read access to secrets:
```console
$ kubectl create namespace foo
$ kubectl create rolebinding sam-view \
    --clusterrole view \
    --user sam \
    --namespace foo
$ kubectl create rolebinding sam-secret-reader \
    --clusterrole secret-reader \
    --user sam \
    --namespace foo
```
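You can verify the result with `kubectl auth can-i`, impersonating the user (run this as a cluster administrator; the output shown is illustrative):
```console
$ kubectl auth can-i list secrets --namespace foo --as sam
yes
$ kubectl auth can-i create deployments --namespace foo --as sam
no
```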
### Example: Grant a user read-only access at the cluster scope
In certain scenarios, it may be beneficial to grant a user cluster-scope access.
For example, if a user wants to run the command `helm list --all-namespaces`,
the API requires the user to have cluster-scope read access.
To do that, grant the user both `view` and `secret-reader` access as described
above, but with a ClusterRoleBinding.
```console
$ kubectl create clusterrolebinding sam-view \
    --clusterrole view \
    --user sam
$ kubectl create clusterrolebinding sam-secret-reader \
    --clusterrole secret-reader \
    --user sam
```
## Additional Thoughts
The examples shown above utilize the default ClusterRoles provided with
Kubernetes. For more fine-grained control over what resources users are granted
access to, have a look at [the Kubernetes
documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on
creating your own custom Roles and ClusterRoles.
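As a starting point, a custom Role that only allows a user to manage the core resources a typical chart creates might look like this (a minimal sketch; the role name, namespace, and resource list are assumptions you should adapt):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chart-deployer
  namespace: foo
rules:
# Core-group resources commonly templated by charts.
- apiGroups: [""]
  resources: ["configmaps", "services", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Workload resources live in the "apps" API group.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```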
---
title: "Library Charts"
description: "Explains library charts and examples of usage"
aliases: ["docs/library_charts/"]
weight: 4
---
A library chart is a type of [Helm chart](https://helm.sh/docs/topics/charts/)
that defines chart primitives or definitions which can be shared by Helm
templates in other charts. This allows users to share snippets of code that can
be re-used across charts, avoiding repetition and keeping charts
[DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
The library chart was introduced in Helm 3 to formally recognize common or
helper charts that have been used by chart maintainers since Helm 2. By
including it as a chart type, it provides:
- A means to explicitly distinguish between common and application charts
- Logic to prevent installation of a common chart
- No rendering of templates in a common chart which may contain release
artifacts
- Allow for dependent charts to use the importer's context
A chart maintainer can define a common chart as a library chart and now be
confident that Helm will handle the chart in a standard consistent fashion. It
also means that definitions in an application chart can be shared by changing
the chart type.
## Create a Simple Library Chart
As mentioned previously, a library chart is a type of [Helm chart](https://helm.sh/docs/topics/charts/). This means that you can start off by creating a
scaffold chart:
```console
$ helm create mylibchart
Creating mylibchart
```
You will first remove all of the files in the `templates` directory, as we will
create our own template definitions in this example.
```console
$ rm -rf mylibchart/templates/*
```
The values file will not be required either.
```console
$ rm -f mylibchart/values.yaml
```
Before we jump into creating common code, let's do a quick review of some
relevant Helm concepts. A [named template](https://helm.sh/docs/chart_template_guide/named_templates/)
(sometimes called a partial or a subtemplate) is simply a template defined
inside of a file and given a name. In the `templates/` directory, any file that
begins with an underscore (`_`) is not expected to output a Kubernetes manifest
file. So by convention, helper templates and partials are placed in `_*.tpl` or
`_*.yaml` files.
In this example, we will code a common ConfigMap which creates an empty
ConfigMap resource. We will define the common ConfigMap in file
`mylibchart/templates/_configmap.yaml` as follows:
```yaml
{{- define "mylibchart.configmap.tpl" -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-{{ .Release.Name }} # illustrative; callers can override the name through the merge helper
data: {}
{{- end -}}
{{- define "mylibchart.configmap" -}}
{{- include "mylibchart.util.merge" (append . "mylibchart.configmap.tpl") -}}
{{- end -}}
```
The ConfigMap construct is defined in named template `mylibchart.configmap.tpl`.
It is a simple ConfigMap with an empty resource, `data`. Within this file there
is another named template called `mylibchart.configmap`. This named template
includes another named template `mylibchart.util.merge` which will take 2 named
templates as arguments, the template calling `mylibchart.configmap` and
`mylibchart.configmap.tpl`.
The helper function `mylibchart.util.merge` is a named template in
`mylibchart/templates/_util.yaml`. It is a handy util from [The Common Helm
Helper Chart](#the-common-helm-helper-chart) because it merges the 2 templates
and overrides any common parts in both:
```yaml
{{- /*
mylibchart.util.merge will merge two YAML templates and output the result.
It takes an array of three values:
- the top context
- the template name of the overrides (destination)
- the template name of the base (source)
*/}}
{{- define "mylibchart.util.merge" -}}
{{- $top := first . -}}
{{- $overrides := fromYaml (include (index . 1) $top) | default (dict) -}}
{{- $tpl := fromYaml (include (index . 2) $top) | default (dict) -}}
{{- toYaml (merge $overrides $tpl) -}}
{{- end -}}
```
This is important when a chart wants to use common code that it needs to
customize with its configuration.
Finally, let's change the chart type to `library`. This requires editing
`mylibchart/Chart.yaml` as follows:
```yaml
apiVersion: v2
name: mylibchart
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
# type: application
type: library
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application and it is recommended to use it with quotes.
appVersion: "1.16.0"
```
The library chart is now ready to be shared and its ConfigMap definition to be
re-used.
Before moving on, it is worth checking if Helm recognizes the chart as a library
chart:
```console
$ helm install mylibchart mylibchart/
Error: library charts are not installable
```
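You can also confirm the chart type directly with `helm show chart`, which prints the chart's `Chart.yaml` metadata, including the `type: library` line:
```console
$ helm show chart mylibchart/
```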
## Use the Simple Library Chart
It is time to use the library chart. This means creating a scaffold chart again:
```console
$ helm create mychart
Creating mychart
```
Let's clean out the template files again, as we only want to create a ConfigMap:
```console
$ rm -rf mychart/templates/*
```
When we want to create a simple ConfigMap in a Helm template, it could look
similar to the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap # illustrative; any release-scoped name works here
data:
  myvalue: "Hello World"
```
We are however going to re-use the common code already created in `mylibchart`.
The ConfigMap can be created in the file `mychart/templates/configmap.yaml` as
follows:
```yaml
{{- include "mylibchart.configmap" (list . "mychart.configmap") -}}
{{- define "mychart.configmap" -}}
data:
  myvalue: "Hello World"
{{- end -}}
```
You can see that it simplifies the work we have to do by inheriting the common
ConfigMap definition which adds standard properties for ConfigMap. In our
template we add the configuration, in this case the data key `myvalue` and its
value. The configuration overrides the empty `data` resource of the common ConfigMap.
This is feasible because of the helper function `mylibchart.util.merge` we
mentioned in the previous section.
To be able to use the common code, we need to add `mylibchart` as a dependency.
Add the following to the end of the file `mychart/Chart.yaml`:
```yaml
# My common code in my library chart
dependencies:
- name: mylibchart
version: 0.1.0
repository: file://../mylibchart
```
This includes the library chart as a dynamic dependency from the filesystem
which is at the same parent path as our application chart. As we are including
the library chart as a dynamic dependency, we need to run `helm dependency
update`. It will copy the library chart into your `charts/` directory.
```console
$ helm dependency update mychart/
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Deleting outdated charts
```
We are now ready to deploy our chart. Before installing, it is worth checking
the rendered template first.
```console
$ helm install mydemo mychart/ --debug --dry-run
install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /root/test/helm-charts/mychart
NAME: mydemo
LAST DEPLOYED: Tue Mar 3 17:48:47 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
pullPolicy: IfNotPresent
repository: nginx
imagePullSecrets: []
ingress:
annotations: {}
enabled: false
hosts:
- host: chart-example.local
paths: []
tls: []
mylibchart:
global: {}
nameOverride: ""
nodeSelector: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
port: 80
type: ClusterIP
serviceAccount:
annotations: {}
create: true
name: null
tolerations: []
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
data:
myvalue: Hello World
kind: ConfigMap
metadata:
labels:
app: mychart
chart: mychart-0.1.0
release: mydemo
name: mychart-mydemo
```
This looks like the ConfigMap we want, with the data override of `myvalue: Hello
World`. Let's install it:
```console
$ helm install mydemo mychart/
NAME: mydemo
LAST DEPLOYED: Tue Mar 3 17:52:40 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
We can retrieve the release and see that the actual template was loaded.
```console
$ helm get manifest mydemo
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
data:
myvalue: Hello World
kind: ConfigMap
metadata:
labels:
app: mychart
chart: mychart-0.1.0
release: mydemo
name: mychart-mydemo
```
## Library Chart Benefits
Because of their inability to act as standalone charts, library charts can leverage the following functionality:
- The `.Files` object references the file paths on the parent chart, rather than the path local to the library chart
- The `.Values` object is the same as the parent chart's, in contrast to application [subcharts](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/), which receive the section of values configured under their header in the parent.
## The Common Helm Helper Chart
```markdown
Note: The Common Helm Helper Chart repo on Github is no longer actively maintained, and the repo has been deprecated and archived.
```
This [chart](https://github.com/helm/charts/tree/master/incubator/common) was
the original pattern for common charts. It provides utilities that reflect best
practices of Kubernetes chart development. Best of all it can be used off the
bat by you when developing your charts to give you handy shared code.
Here is a quick way to use it. For more details, have a look at the
[README](https://github.com/helm/charts/blob/master/incubator/common/README.md).
Create a scaffold chart again:
```console
$ helm create demo
Creating demo
```
Let's use the common code from the helper chart. First, edit the deployment file
`demo/templates/deployment.yaml` as follows:
```yaml
{{- template "common.deployment" (list . "demo.deployment") -}}
{{- define "demo.deployment" -}}
## Define overrides for your Deployment resource here, e.g.
## (the replica and label expressions below are illustrative overrides)
apiVersion: apps/v1
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "demo.name" . }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "demo.name" . }}
{{- end -}}
```
And now the service file, `demo/templates/service.yaml` as follows:
```yaml
{{- template "common.service" (list . "demo.service") -}}
{{- define "demo.service" -}}
## Define overrides for your Service resource here, e.g.
# metadata:
#   labels:
#     custom: label
# spec:
#   ports:
#   - port: 8080
{{- end -}}
```
These templates show how inheriting the common code from the helper chart
reduces your coding to just the configuration or customization of the
resources.
To be able to use the common code, we need to add `common` as a dependency. Add
the following to the end of the file `demo/Chart.yaml`:
```yaml
dependencies:
- name: common
version: "^0.0.5"
repository: "https://charts.helm.sh/incubator/"
```
Note: You will need to add the `incubator` repo to the Helm repository list
(`helm repo add`).
As we are including the chart as a dynamic dependency, we need to run `helm
dependency update`. It will copy the helper chart into your `charts/` directory.
As the helper chart uses some Helm 2 constructs, you will need to add the
following to `demo/values.yaml` so that the `nginx` image can be loaded; the
image settings were updated in the Helm 3 scaffold chart:
```yaml
image:
tag: 1.16.0
```
You can test that the chart templates are correct prior to deploying using the `helm lint` and `helm template` commands.
If it's good to go, deploy away using `helm install`!
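A typical check-then-install sequence for the `demo` chart might look like this (illustrative; the release name is up to you):
```console
$ helm lint demo/
$ helm template demo demo/
$ helm install demo demo/
```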
<!--
-----------------NOTICE------------------------
This file is referenced in code as
https://github.com/kubernetes/ingress-nginx/blob/main/docs/kubectl-plugin.md
Do not move it without providing redirects.
-----------------------------------------------
-->
# The ingress-nginx kubectl plugin
## Installation
Install [krew](https://github.com/GoogleContainerTools/krew), then run
```console
kubectl krew install ingress-nginx
```
to install the plugin. Then run
```console
kubectl ingress-nginx --help
```
to make sure the plugin is properly installed and to get a list of commands:
```console
kubectl ingress-nginx --help
A kubectl plugin for inspecting your ingress-nginx deployments
Usage:
ingress-nginx [command]
Available Commands:
backends Inspect the dynamic backend information of an ingress-nginx instance
certs Output the certificate data stored in an ingress-nginx pod
conf Inspect the generated nginx.conf
exec Execute a command inside an ingress-nginx pod
general Inspect the other dynamic ingress-nginx information
help Help about any command
info Show information about the ingress-nginx service
ingresses Provide a short summary of all of the ingress definitions
lint Inspect kubernetes resources for possible issues
logs Get the kubernetes logs for an ingress-nginx pod
ssh ssh into a running ingress-nginx pod
Flags:
--as string Username to impersonate for the operation
--as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--cache-dir string Default HTTP cache directory (default "/Users/alexkursell/.kube/http-cache")
--certificate-authority string Path to a cert file for the certificate authority
--client-certificate string Path to a client certificate file for TLS
--client-key string Path to a client key file for TLS
--cluster string The name of the kubeconfig cluster to use
--context string The name of the kubeconfig context to use
-h, --help help for ingress-nginx
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string Path to the kubeconfig file to use for CLI requests.
-n, --namespace string If present, the namespace scope for this CLI request
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-s, --server string The address and port of the Kubernetes API server
--token string Bearer token for authentication to the API server
--user string The name of the kubeconfig user to use
Use "ingress-nginx [command] --help" for more information about a command.
```
## Common Flags
- Every subcommand supports the basic `kubectl` configuration flags like `--namespace`, `--context`, `--client-key` and so on.
- Subcommands that act on a particular `ingress-nginx` pod (`backends`, `certs`, `conf`, `exec`, `general`, `logs`, `ssh`) support the `--deployment <deployment>`, `--pod <pod>`, and `--container <container>` flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The `--deployment` flag defaults to `ingress-nginx-controller`, and the `--container` flag defaults to `controller`. See the example after this list.
- Subcommands that inspect resources (`ingresses`, `lint`) support the `--all-namespaces` flag, which causes them to inspect resources in every namespace.
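For example, the pod-selection flags described above can be combined to read logs from one specific controller pod and container (the pod name below is illustrative, borrowed from the `ssh` example at the end of this page):
```console
$ kubectl ingress-nginx logs -n ingress-nginx \
    --pod ingress-nginx-controller-7cbf77c976-wx5pn \
    --container controller
```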
## Subcommands
Note that `backends`, `general`, `certs`, and `conf` require `ingress-nginx` version `0.23.0` or higher.
### backends
Run `kubectl ingress-nginx backends` to get a JSON array of the backends that an ingress-nginx controller currently knows about:
```console
$ kubectl ingress-nginx backends -n ingress-nginx
[
{
"name": "default-apple-service-5678",
"service": {
"metadata": {
"creationTimestamp": null
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 5678,
"targetPort": 5678
}
],
"selector": {
"app": "apple"
},
"clusterIP": "10.97.230.121",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
},
"port": 0,
"sslPassthrough": false,
"endpoints": [
{
"address": "10.1.3.86",
"port": "5678"
}
],
"sessionAffinityConfig": {
"name": "",
"cookieSessionAffinity": {
"name": ""
}
},
"upstreamHashByConfig": {
"upstream-hash-by-subset-size": 3
},
"noServer": false,
"trafficShapingPolicy": {
"weight": 0,
"header": "",
"headerValue": "",
"cookie": ""
}
},
{
"name": "default-echo-service-8080",
...
},
{
"name": "upstream-default-backend",
...
}
]
```
Add the `--list` option to show only the backend names. Add the `--backend <backend>` option to show only the backend with the given name.
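For example, to list only the backend names from the output above, or to drill into a single backend:
```console
$ kubectl ingress-nginx backends -n ingress-nginx --list
$ kubectl ingress-nginx backends -n ingress-nginx --backend default-apple-service-5678
```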
### certs
Use `kubectl ingress-nginx certs --host <hostname>` to dump the SSL cert/key information for a given host.
**WARNING:** This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere.
```console
$ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
<REDACTED! DO NOT SHARE THIS!>
-----END RSA PRIVATE KEY-----
```
### conf
Use `kubectl ingress-nginx conf` to dump the generated `nginx.conf` file. Add the `--host <hostname>` option to view only the server block for that host:
```console
$ kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local
server {
server_name testaddr.local ;
listen 80;
set $proxy_upstream_name "-";
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
location / {
set $namespace "";
set $ingress_name "";
set $service_name "";
set $service_port "0";
set $location_path "/";
...
```
### exec
`kubectl ingress-nginx exec` is exactly the same as `kubectl exec`, with the same command flags. It will automatically choose an `ingress-nginx` pod to run the command in.
```console
$ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx
fastcgi_params
geoip
lua
mime.types
modsecurity
modules
nginx.conf
opentracing.json
opentelemetry.toml
owasp-modsecurity-crs
template
```
### info
Shows the internal and external IP/CNAMES for an `ingress-nginx` service.
```console
$ kubectl ingress-nginx info -n ingress-nginx
Service cluster IP address: 10.187.253.31
LoadBalancer IP|CNAME: 35.123.123.123
```
Use the `--service <service>` flag if your `ingress-nginx` `LoadBalancer` service is not named `ingress-nginx`.
### ingresses
`kubectl ingress-nginx ingresses`, alternately `kubectl ingress-nginx ing`, shows a more detailed view of the ingress definitions in a namespace.
Compare:
```console
$ kubectl get ingresses --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d
default test-ingress-2 * localhost 80 5d
```
vs.
```console
$ kubectl ingress-nginx ingresses --all-namespaces
NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS
default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5
default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1
default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5
default test-ingress-2 * localhost NO echo-service 8080 2
```
### lint
`kubectl ingress-nginx lint` can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between `ingress-nginx` versions.
```console
$ kubectl ingress-nginx lint --all-namespaces --verbose
Checking ingresses...
✗ anamespace/this-nginx
- Contains the removed session-cookie-hash annotation.
Lint added for version 0.24.0
https://github.com/kubernetes/ingress-nginx/issues/3743
✗ othernamespace/ingress-definition-blah
- The rewrite-target annotation value does not reference a capture group
Lint added for version 0.22.0
https://github.com/kubernetes/ingress-nginx/issues/3174
Checking deployments...
✗ namespace2/ingress-nginx-controller
- Uses removed config flag --sort-backends
Lint added for version 0.22.0
https://github.com/kubernetes/ingress-nginx/issues/3655
- Uses removed config flag --enable-dynamic-certificates
Lint added for version 0.24.0
https://github.com/kubernetes/ingress-nginx/issues/3808
```
To show the lints added **only** for a particular `ingress-nginx` release, use the `--from-version` and `--to-version` flags:
```console
$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0
Checking ingresses...
✗ anamespace/this-nginx
- Contains the removed session-cookie-hash annotation.
Lint added for version 0.24.0
https://github.com/kubernetes/ingress-nginx/issues/3743
Checking deployments...
✗ namespace2/ingress-nginx-controller
- Uses removed config flag --enable-dynamic-certificates
Lint added for version 0.24.0
https://github.com/kubernetes/ingress-nginx/issues/3808
```
### logs
`kubectl ingress-nginx logs` is almost the same as `kubectl logs`, with fewer flags. It will automatically choose an `ingress-nginx` pod to read logs from.
```console
$ kubectl ingress-nginx logs -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: dev
Build: git-48dc3a867
Repository: [email protected]:kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------
W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.9
W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443
I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64
I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller
I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"82258915-563e-11e9-9c52-025000000001", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
...
```
### ssh
`kubectl ingress-nginx ssh` is exactly the same as `kubectl ingress-nginx exec -it -- /bin/bash`. Use it when you want to quickly be dropped into a shell inside a running `ingress-nginx` container.
```console
$ kubectl ingress-nginx ssh -n ingress-nginx
www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$
```
<!--
-----------------NOTICE------------------------
This file is referenced in code as
https://github.com/kubernetes/ingress-nginx/blob/main/docs/troubleshooting.md
Do not move it without providing redirects.
-----------------------------------------------
-->
# Troubleshooting
## Ingress-Controller Logs and Events
There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting
methods to obtain more information.
### Check the Ingress Resource Events
```console
$ kubectl get ing -n <namespace-of-ingress-resource>
NAME HOSTS ADDRESS PORTS AGE
cafe-ingress cafe.com 10.0.2.15 80 25s
$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>
Name: cafe-ingress
Namespace: default
Address: 10.0.2.15
Default backend: default-http-backend:80 (172.17.0.5:8080)
Rules:
Host Path Backends
---- ---- --------
cafe.com
/tea tea-svc:80 (<none>)
/coffee coffee-svc:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default","selfLink":"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress"},"spec":{"rules":[{"host":"cafe.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}]},"status":{"loadBalancer":{"ingress":[{"ip":"169.48.142.110"}]}}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress
Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress
```
### Check the Ingress Controller Logs
```console
$ kubectl get pods -n <namespace-of-ingress-controller>
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m
$ kubectl logs -n <namespace> ingress-nginx-controller-67956bf89d-fv58j
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.14.0
Build: git-734361d
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
....
```
### Check the Nginx Configuration
```console
$ kubectl get pods -n <namespace-of-ingress-controller>
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m
$ kubectl exec -it -n <namespace-of-ingress-controller> ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf
daemon off;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 523264;
worker_shutdown_timeout 240s;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
....
```
### Check if used Services Exist
```console
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default coffee-svc ClusterIP 10.106.154.35 <none> 80/TCP 18m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
default tea-svc ClusterIP 10.104.172.12 <none> 80/TCP 18m
kube-system default-http-backend NodePort 10.108.189.236 <none> 80:30001/TCP 30m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 30m
kube-system kubernetes-dashboard NodePort 10.103.128.17 <none> 80:30000/TCP 30m
```
## Debug Logging
Using the flag `--v=XX`, it is possible to increase the level of logging. This is done by editing
the deployment.
```console
$ kubectl get deploy -n <namespace-of-ingress-controller>
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default-http-backend 1 1 1 1 35m
ingress-nginx-controller 1 1 1 1 35m
$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller
# Add --v=X to "- args", where X is an integer
```
- `--v=2` shows details using `diff` about the changes in the configuration in nginx
- `--v=3` shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](https://nginx.org/en/docs/debugging_log.html)
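If you prefer not to edit the deployment interactively, a JSON patch can append the flag. The following is a minimal sketch, assuming the deployment is named `ingress-nginx-controller` and the controller is the first (index 0) container in the pod template:
```console
$ kubectl patch deployment ingress-nginx-controller \
    -n <namespace-of-ingress-controller> \
    --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=3"}]'
```
The rollout triggered by the patch recreates the controller pods with the new verbosity level.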
## Authentication to the Kubernetes API Server
A number of components are involved in the authentication process and the first step is to narrow
down the source of the problem, namely whether it is a problem with service authentication or
with the kubeconfig file.
Both authentications must work:
```
+-------------+ service +------------+
| | authentication | |
+ apiserver +<-------------------+ ingress |
| | | controller |
+-------------+ +------------+
```
**Service authentication**
The Ingress controller needs information from the apiserver. Therefore, authentication is required, which can be achieved in a couple of ways:
* _Service Account:_ This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.
* _Kubeconfig file:_ In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the `--kubeconfig` flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the `--kubeconfig` flag does not require the `--apiserver-host` flag.
The format of the file is identical to `~/.kube/config` which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.
* _Using the flag `--apiserver-host`:_ With the flag `--apiserver-host=http://localhost:8080` it is possible to specify an unsecured API server or to reach a remote Kubernetes cluster using [kubectl proxy](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#proxy); see the sketch after this list.
  Please do not use this approach in production.
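A rough sketch of this approach, for local development only and assuming you have some way to run the controller binary against your workstation (for example a development build):
```console
# open a local, unauthenticated proxy to the API server
$ kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

# in another terminal, point the controller at the proxy
$ /nginx-ingress-controller --apiserver-host=http://localhost:8080
```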
In the diagram below you can see the full authentication flow with all options, starting with the browser
on the lower left hand side.
```
Kubernetes Workstation
+---------------------------------------------------+ +------------------+
| | | |
| +-----------+ apiserver +------------+ | | +------------+ |
| | | proxy | | | | | | |
| | apiserver | | ingress | | | | ingress | |
| | | | controller | | | | controller | |
| | | | | | | | | |
| | | | | | | | | |
| | | service account/ | | | | | | |
| | | kubeconfig | | | | | | |
| | +<-------------------+ | | | | | |
| | | | | | | | | |
| +------+----+ kubeconfig +------+-----+ | | +------+-----+ |
| |<--------------------------------------------------------| |
| | | |
+---------------------------------------------------+ +------------------+
```
### Service Account
If using a service account to connect to the API server, the ingress-controller expects the file
`/var/run/secrets/kubernetes.io/serviceaccount/token` to be present. It provides a secret
token that is required to authenticate with the API server.
Verify with the following commands:
```console
# start a container that contains curl
$ kubectl run -it --rm test --image=curlimages/curl --restart=Never -- /bin/sh
# check if secret exists
/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
/ $
# check base connectivity from cluster inside
/ $ curl -k https://kubernetes.default.svc.cluster.local
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}/ $
# connect using tokens
}/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local
&& echo
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
... TRUNCATED
"/readyz/shutdown",
"/version"
]
}
/ $
# when you type `exit` or `^D` the test pod will be deleted.
```
If it is not working, there are two possible reasons:
1. The contents of the tokens are invalid. Find the secret name with `kubectl get secrets | grep service-account` and
delete it with `kubectl delete secret <name>`. It will automatically be recreated.
2. You have a non-standard Kubernetes installation and the file containing the token may not be present.
The API server will mount a volume containing this file, but only if the API server is configured to use
the ServiceAccount admission controller.
If you experience this error, verify that your API server is using the ServiceAccount admission controller.
If you are configuring the API server by hand, you can set this with the `--admission-control` parameter.
> Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.
More information:
- [User Guide: Service Accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
- [Cluster Administrator Guide: Managing Service Accounts](http://kubernetes.io/docs/admin/service-accounts-admin/)
## Kube-Config
If you want to use a kubeconfig file for authentication, follow the [deploy procedure](deploy/index.md) and
add the flag `--kubeconfig=/etc/kubernetes/kubeconfig.yaml` to the args section of the deployment.
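If you would rather not edit the deployment by hand, the same `kubectl patch` approach shown under Debug Logging can append the flag. This is only a sketch; it assumes the kubeconfig file is already mounted into the controller container (index 0) at that path:
```console
$ kubectl patch deployment ingress-nginx-controller \
    -n <namespace-of-ingress-controller> \
    --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubeconfig=/etc/kubernetes/kubeconfig.yaml"}]'
```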
## Using GDB with Nginx
[Gdb](https://www.gnu.org/software/gdb/) can be used with nginx to perform a configuration
dump. This allows us to see which configuration is being used, as well as older configurations.
Note: The below is based on the nginx [documentation](https://docs.nginx.com/nginx/admin-guide/monitoring/debugging/#dumping-nginx-configuration-from-a-running-process).
1. SSH into the worker
```console
$ ssh user@workerIP
```
2. Obtain the Docker Container Running nginx
```console
$ docker ps | grep ingress-nginx-controller
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9e1d243156a registry.k8s.io/ingress-nginx/controller "/usr/bin/dumb-init …" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0
```
3. Exec into the container
```console
$ docker exec -it --user=0 --privileged d9e1d243156a bash
```
4. Make sure nginx was built with `--with-debug`
```console
$ nginx -V 2>&1 | grep -- '--with-debug'
```
5. Get list of processes running on container
```console
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres
root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa
root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/
nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process
nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process
root 172 0 0 20:43 pts/0 00:00:00 bash
```
6. Attach gdb to the nginx master process
```console
$ gdb -p 21
....
Attaching to process 21
Reading symbols from /usr/sbin/nginx...done.
....
(gdb)
```
7. Copy and paste the following:
```console
set $cd = ngx_cycle->config_dump
set $nelts = $cd.nelts
set $elts = (ngx_conf_dump_t*)($cd.elts)
while ($nelts-- > 0)
set $name = $elts[$nelts]->name.data
printf "Dumping %s to nginx_conf.txt\n", $name
append memory nginx_conf.txt \
$elts[$nelts]->buffer.start $elts[$nelts]->buffer.end
end
```
8. Quit GDB by pressing CTRL+D
9. Open nginx_conf.txt
```console
cat nginx_conf.txt
```
## Image-related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)
1. In case you face the below error while installing Nginx using the helm chart (either by helm commands or the helm_release terraform provider):
```
Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": rpc error: code = Unknown desc = failed to pull and unpack image "registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": failed to resolve reference "registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": failed to do request: Head "https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": EOF
```
Then please follow the below steps.
2. During troubleshooting, you can also execute the below commands to test connectivity from your local machine to the repositories:
a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null
```
(⎈ |myprompt)➜ ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
(⎈ |myprompt)➜ ~
```
b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
```
(⎈ |myprompt)➜ ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
HTTP/2 200
docker-distribution-api-version: registry/2.0
content-type: application/vnd.docker.distribution.manifest.list.v2+json
docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
content-length: 1384
date: Wed, 28 Sep 2022 16:46:28 GMT
server: Docker Registry
x-xss-protection: 0
x-frame-options: SAMEORIGIN
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
(⎈ |myprompt)➜ ~
```
`registry.k8s.io` is a proxy that redirects image pulls to backing registries (for example `eu.gcr.io`), which is why these additional domains must be reachable.
3. The recommended solution is to whitelist the below image repository domains:
```
*.appspot.com
*.k8s.io
*.pkg.dev
*.gcr.io
```
More details about the above repos:
a. *.k8s.io -> Ensures you can pull any images from registry.k8s.io.
b. *.gcr.io -> GCP services are used for image hosting. This is one of the domains suggested by GCP to allow so that users can pull images from its container registry services.
c. *.appspot.com -> This is a Google domain, part of the domains used for GCR.
## Unable to listen on port (80/443)
One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE [linux capability](https://man7.org/linux/man-pages/man7/capabilities.7.html) to allow binding these ports as a normal user (www-data / 101). This involves two components:
1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via [setcap](https://man7.org/linux/man-pages/man8/setcap.8.html))
2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.
If you encounter this on some node(s) and not on others, try purging and re-pulling the image on the affected node(s), in case corruption of the underlying layers has caused the executable to lose the capability.
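A quick way to confirm the second component is to inspect the capabilities declared on the controller container. This is a sketch, assuming the deployment is named `ingress-nginx-controller` and the controller is the first container; the output should list `NET_BIND_SERVICE` under `add`:
```console
$ kubectl get deployment ingress-nginx-controller \
    -n <namespace-of-ingress-controller> \
    -o jsonpath='{.spec.template.spec.containers[0].securityContext.capabilities}'
```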
### Create a test pod
The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running "sleep 3600", and exec into it for further troubleshooting. For example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: ingress-nginx-sleep
namespace: default
labels:
app: nginx
spec:
containers:
- name: nginx
image: ##_CONTROLLER_IMAGE_##
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1"
command: ["sleep"]
args: ["3600"]
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
restartPolicy: Never
nodeSelector:
kubernetes.io/hostname: ##_NODE_NAME_##
tolerations:
- key: "node.kubernetes.io/unschedulable"
operator: "Exists"
effect: NoSchedule
```
* update the namespace if applicable/desired
* replace `##_NODE_NAME_##` with the problematic node (or remove nodeSelector section if problem is not confined to one node)
* replace `##_CONTROLLER_IMAGE_##` with the same image as in use by your ingress-nginx deployment
* confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster
Apply the YAML and open a shell into the pod.
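For example, assuming the manifest above was saved as `ingress-nginx-sleep.yaml` (a hypothetical filename) and the namespace was left as `default`:
```console
$ kubectl apply -f ingress-nginx-sleep.yaml
$ kubectl exec -it -n default ingress-nginx-sleep -- /bin/bash
```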
Try to manually run the controller process:
```console
$ /nginx-ingress-controller
```
You should get the same error as from the ingress controller pod logs.
Confirm the capabilities are properly surfacing into the pod:
```console
$ grep CapBnd /proc/1/status
CapBnd: 0000000000000400
```
The above value has only net_bind_service enabled (per the security context in the YAML, which adds that capability and drops all others). If you get a different value, you can decode it on another Linux box (capsh is not available in this container) as shown below, and then figure out why the specified capabilities are not propagating into the pod/container.
```console
$ capsh --decode=0000000000000400
0x0000000000000400=cap_net_bind_service
```
## Create a test pod as root
(Note: this may be restricted by Pod Security Admission/Standards, OPA Gatekeeper, etc., in which case you will need an appropriate workaround for testing, e.g. deploying in a new namespace without the restrictions.)
To test further, you may want to install additional utilities, etc. Modify the pod YAML by:
* changing `runAsUser` from 101 to 0
* removing the `drop: ALL` entry from the capabilities.
Some things to try after shelling into this container:
Try running the controller as the www-data (101) user:
```console
$ chmod 4755 /nginx-ingress-controller
$ /nginx-ingress-controller
```
Examine the errors to see if there is still an issue listening on the port, or whether it got past that and moved on to other errors that are expected because it is running outside its normal context.
Install the libcap package and check capabilities on the file:
```console
$ apk add libcap
(1/1) Installing libcap (2.50-r0)
Executing busybox-1.33.1-r7.trigger
OK: 26 MiB in 41 packages
$ getcap /nginx-ingress-controller
/nginx-ingress-controller cap_net_bind_service=ep
```
(if the capability is missing, see above about purging the image on the node and re-pulling)
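As a sketch of that purge-and-re-pull step, assuming a containerd-based node with `crictl` available (adjust for your container runtime):
```console
# on the affected node: find and remove the cached controller image
$ crictl images | grep ingress-nginx/controller
$ crictl rmi <image-id-from-the-previous-output>

# then delete the controller pod scheduled on that node so the image is pulled again
$ kubectl delete pod -n <namespace-of-ingress-controller> <controller-pod-on-that-node>
```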
Strace the executable to see what system calls are being executed when it fails:
```console
$ apk add strace
(1/1) Installing strace (5.12-r0)
Executing busybox-1.33.1-r7.trigger
OK: 28 MiB in 42 packages
$ strace /nginx-ingress-controller
execve("/nginx-ingress-controller", ["/nginx-ingress-controller"], 0x7ffeb9eb3240 /* 131 vars */) = 0
arch_prctl(ARCH_SET_FS, 0x29ea690) = 0
...
```
<!---
This file is autogenerated!
Do not try to edit it manually.
-->
# e2e test suite for [Ingress NGINX Controller](https://github.com/kubernetes/ingress-nginx/tree/main/)
### [[Admission] admission controller](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L39)
- [should not allow overlaps of host and paths without canary annotations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L47)
- [should allow overlaps of host and paths with canary annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L64)
- [should block ingress with invalid path](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L85)
- [should return an error if there is an error validating the ingress definition](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L102)
- [should return an error if there is an invalid value in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L116)
- [should return an error if there is a forbidden value in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L130)
- [should return an error if there is an invalid path and wrong pathType is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L144)
- [should not return an error if the Ingress V1 definition is valid with Ingress Class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L178)
- [should not return an error if the Ingress V1 definition is valid with IngressClass annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L194)
- [should return an error if the Ingress V1 definition contains invalid annotations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L210)
- [should not return an error for an invalid Ingress when it has unknown class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L224)
### [affinity session-cookie-name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L43)
- [should set sticky cookie SERVERID](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L50)
- [should change cookie name on ingress definition change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L72)
- [should set the path to /something on the generated cookie](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L107)
- [does not set the path to / on the generated cookie if there's more than one rule referring to the same backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L129)
- [should set cookie with expires](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L202)
- [should set cookie with domain](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L234)
- [should not set cookie without domain annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L257)
- [should work with use-regex annotation and session-cookie-path](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L279)
- [should warn user when use-regex is true and session-cookie-path is not set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L303)
- [should not set affinity across all server locations when using separate ingresses](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L329)
- [should set sticky cookie without host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L361)
- [should work with server-alias annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L381)
- [should set secure in cookie with provided true annotation on http](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L421)
- [should not set secure in cookie with provided false annotation on http](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L444)
- [should set secure in cookie with provided false annotation on https](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L467)
### [affinitymode](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinitymode.go#L33)
- [Balanced affinity mode should balance](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinitymode.go#L36)
- [Check persistent affinity mode](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinitymode.go#L69)
### [server-alias](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L31)
- [should return status code 200 for host 'foo' and 404 for 'bar'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L38)
- [should return status code 200 for host 'foo' and 'bar'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L64)
- [should return status code 200 for hosts defined in two ingresses, different path with one alias](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L89)
### [app-root](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/approot.go#L28)
- [should redirect to /foo](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/approot.go#L35)
### [auth-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L45)
- [should return status code 200 when no authentication is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L52)
- [should return status code 503 when authentication is configured with an invalid secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L71)
- [should return status code 401 when authentication is configured but Authorization header is not configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L95)
- [should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L122)
- [should return status code 401 and cors headers when authentication and cors is configured but Authorization header is not configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L150)
- [should return status code 200 when authentication is configured and Authorization header is sent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L178)
- [should return status code 200 when authentication is configured with a map and Authorization header is sent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L205)
- [should return status code 401 when authentication is configured with invalid content and Authorization header is sent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L233)
- [proxy_set_header My-Custom-Header 42;](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L272)
- [proxy_set_header My-Custom-Header 42;](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L292)
- [proxy_set_header 'My-Custom-Header' '42';](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L311)
- [user retains cookie by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L420)
- [user does not retain cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L431)
- [user with annotated ingress retains cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L442)
- [should return status code 200 when signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L481)
- [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L490)
- [keeps processing new ingresses even if one of the existing ingresses is misconfigured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L501)
- [should overwrite Foo header with auth response](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L525)
- [should return status code 200 when signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L701)
- [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L710)
- [keeps processing new ingresses even if one of the existing ingresses is misconfigured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L721)
- [should return status code 200 when signed in after auth backend is deleted ](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L780)
- [should deny login for different location on same server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L800)
- [should deny login for different servers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L828)
- [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L857)
- [should return 503 (location was denied)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L887)
- [should add error to the config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L895)
### [auth-tls-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L31)
- [should set sslClientCertificate, sslVerifyClient and sslVerifyDepth with auth-tls-secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L38)
- [should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L86)
- [should 302 redirect to error page instead of 400 when auth-tls-error-page is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L116)
- [should pass URL-encoded certificate to upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L163)
- [should validate auth-tls-verify-client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L208)
- [should return 403 using auth-tls-match-cn with no matching CN from client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L267)
- [should return 200 using auth-tls-match-cn with matching CN from client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L296)
- [should reload the nginx config when auth-tls-match-cn is updated](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L325)
- [should return 200 using auth-tls-match-cn where atleast one of the regex options matches CN from client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L368)
### [backend-protocol](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L29)
- [should set backend protocol to https:// and use proxy_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L36)
- [should set backend protocol to https:// and use proxy_pass with lowercase annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L51)
- [should set backend protocol to $scheme:// and use proxy_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L66)
- [should set backend protocol to grpc:// and use grpc_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L81)
- [should set backend protocol to grpcs:// and use grpc_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L96)
- [should set backend protocol to '' and use fastcgi_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L111)
### [canary-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L36)
- [should response with a 200 status from the mainline upstream when requests are made to the mainline ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L45)
- [should return 404 status for requests to the canary if no matching ingress is found](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L89)
- [should return the correct status codes when endpoints are unavailable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L120)
- [should route requests to the correct upstream if mainline ingress is created before the canary ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L174)
- [should route requests to the correct upstream if mainline ingress is created after the canary ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L232)
- [should route requests to the correct upstream if the mainline ingress is modified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L289)
- [should route requests to the correct upstream if the canary ingress is modified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L363)
- [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L445)
- [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L513)
- [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L594)
- [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L647)
- [should routes to mainline upstream when the given Regex causes error](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L692)
- [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L741)
- [respects always and never values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L790)
- [should route requests only to mainline if canary weight is 0](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L862)
- [should route requests only to canary if canary weight is 100](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L910)
- [should route requests only to canary if canary weight is equal to canary weight total](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L952)
- [should route requests split between mainline and canary if canary weight is 50](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L995)
- [should route requests split between mainline and canary if canary weight is 100 and weight total is 200](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1031)
- [should not use canary as a catch-all server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1070)
- [should not use canary with domain as a server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1104)
- [does not crash when canary ingress has multiple paths to the same non-matching backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1138)
- [always routes traffic to canary if first request was affinitized to canary (default behavior)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1175)
- [always routes traffic to canary if first request was affinitized to canary (explicit sticky behavior)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1242)
- [routes traffic to either mainline or canary backend (legacy behavior)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1310)
### [client-body-buffer-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L30)
- [should set client_body_buffer_size to 1000](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L37)
- [should set client_body_buffer_size to 1K](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L59)
- [should set client_body_buffer_size to 1k](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L81)
- [should set client_body_buffer_size to 1m](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L103)
- [should set client_body_buffer_size to 1M](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L125)
- [should not set client_body_buffer_size to invalid 1b](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L147)
### [connection-proxy-header](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/connection.go#L28)
- [set connection header to keep-alive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/connection.go#L35)
### [cors-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L33)
- [should enable cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L40)
- [should set cors methods to only allow POST, GET](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L67)
- [should set cors max-age](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L83)
- [should disable cors allow credentials](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L99)
- [should allow origin for cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L115)
- [should allow headers for cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L142)
- [should expose headers for cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L158)
- [should allow - single origin for multiple cors values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L174)
- [should not allow - single origin for multiple cors values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L201)
- [should allow correct origins - single origin for multiple cors values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L221)
- [should not break functionality](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L272)
- [should not break functionality - without `*`](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L296)
- [should not break functionality with extra domain](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L319)
- [should not match](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L343)
- [should allow - single origin with required port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L363)
- [should not allow - single origin with port and origin without port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L391)
- [should not allow - single origin without port and origin with required port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L410)
- [should allow - matching origin with wildcard origin (2 subdomains)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L430)
- [should not allow - unmatching origin with wildcard origin (2 subdomains)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L473)
- [should allow - matching origin+port with wildcard origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L493)
- [should not allow - portless origin with wildcard origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L520)
- [should allow correct origins - missing subdomain + origin with wildcard origin and correct origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L540)
- [should allow - missing origins (should allow all origins)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L576)
- [should allow correct origin but not others - cors allow origin annotations contain trailing comma](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L636)
- [should allow - origins with non-http[s] protocols](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L673)
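
The CORS tests combine `enable-cors` with several `cors-*` annotations. A hedged, illustrative fragment (origins, headers, and max-age are placeholders):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin.example.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Custom-Header"
    nginx.ingress.kubernetes.io/cors-expose-headers: "X-Request-Id"
    nginx.ingress.kubernetes.io/cors-max-age: "600"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "false"
```
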
### [custom-headers-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L33)
- [should return status code 200 when no custom-headers is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L40)
- [should return status code 503 when custom-headers is configured with an invalid secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L57)
- [more_set_headers 'My-Custom-Header' '42';](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L78)
### [custom-http-errors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customhttperrors.go#L34)
- [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customhttperrors.go#L41)
### [default-backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/default_backend.go#L29)
- [should use a custom default backend as upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/default_backend.go#L37)
### [disable-access-log disable-http-access-log disable-stream-access-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L28)
- [disable-access-log set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L35)
- [disable-http-access-log set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L53)
- [disable-stream-access-log set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L71)
### [disable-proxy-intercept-errors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableproxyintercepterrors.go#L31)
- [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableproxyintercepterrors.go#L39)
### [backend-protocol - FastCGI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L30)
- [should use fastcgi_pass in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L37)
- [should add fastcgi_index in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L54)
- [should add fastcgi_param in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L71)
- [should return OK for service with backend protocol FastCGI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L102)
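
The FastCGI tests switch the backend protocol and tune FastCGI parameters. A hypothetical fragment; the `fastcgi-params` ConfigMap named here is a placeholder holding extra `fastcgi_param` entries:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
    nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
    nginx.ingress.kubernetes.io/fastcgi-params-configmap: "fastcgi-params"
```
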
### [force-ssl-redirect](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/forcesslredirect.go#L27)
- [should redirect to https](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/forcesslredirect.go#L34)
### [from-to-www-redirect](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L31)
- [should redirect from www HTTP to HTTP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L38)
- [should redirect from www HTTPS to HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L64)
### [backend-protocol - GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L45)
- [should use grpc_pass in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L48)
- [should return OK for service with backend protocol GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L71)
- [authorization metadata should be overwritten by external auth response headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L132)
- [should return OK for service with backend protocol GRPCS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L193)
- [should return OK when request not exceed timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L260)
- [should return Error when request exceed timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L303)
### [http2-push-preload](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L27)
- [enable the http2-push-preload directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L34)
### [allowlist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L27)
- [should set valid ip allowlist range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L34)
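
A sketch of the allowlist annotation this test covers; the CIDRs are placeholders:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/allowlist-source-range: "10.0.0.0/24,172.10.0.1"
```
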
### [denylist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L28)
- [only deny explicitly denied IPs, allow all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L35)
- [only allow explicitly allowed IPs, deny all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L86)
### [Annotation - limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L31)
- [should limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L38)
### [limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L29)
- [Check limit-rate annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L37)
### [enable-access-log enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L27)
- [set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L34)
- [set rewrite_log on](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L49)
### [mirror-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L28)
- [should set mirror-target to http://localhost/mirror](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L36)
- [should set mirror-target to https://test.env.com/$request_uri](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L51)
- [should disable mirror-request-body](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L67)
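
An illustrative fragment for the mirror annotations above; the target URL is a placeholder:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/mirror-target: "https://test.env.com/$request_uri"
    nginx.ingress.kubernetes.io/mirror-request-body: "off"
```
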
### [modsecurity owasp](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L39)
- [should enable modsecurity](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L46)
- [should enable modsecurity with transaction ID and OWASP rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L64)
- [should disable modsecurity](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L85)
- [should enable modsecurity with snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L102)
- [should enable modsecurity without using 'modsecurity on;'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L124)
- [should disable modsecurity using 'modsecurity off;'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L147)
- [should enable modsecurity with snippet and block requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L169)
- [should enable modsecurity globally and with modsecurity-snippet block requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L202)
- [should enable modsecurity when enable-owasp-modsecurity-crs is set to true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L235)
- [should enable modsecurity through the config map](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L269)
- [should enable modsecurity through the config map but ignore snippet as disabled by admin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L309)
- [should disable default modsecurity conf setting when modsecurity-snippet is specified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L354)
### [preserve-trailing-slash](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/preservetrailingslash.go#L27)
- [should allow preservation of trailing slashes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/preservetrailingslash.go#L34)
### [proxy-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L30)
- [should set proxy_redirect to off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L38)
- [should set proxy_redirect to default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L54)
- [should set proxy_redirect to hello.com goodbye.com](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L70)
- [should set proxy client-max-body-size to 8m](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L87)
- [should not set proxy client-max-body-size to incorrect value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L102)
- [should set valid proxy timeouts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L117)
- [should not set invalid proxy timeouts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L138)
- [should turn on proxy-buffering](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L159)
- [should turn off proxy-request-buffering](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L181)
- [should build proxy next upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L196)
- [should setup proxy cookies](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L217)
- [should change the default proxy HTTP version](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L235)
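
The `proxy-*` tests cover a family of annotations. A hedged fragment combining several of them (all values are placeholders; the timeouts are seconds expressed as strings):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_503"
    nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "hello.com"
    nginx.ingress.kubernetes.io/proxy-redirect-to: "goodbye.com"
```
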
### [proxy-ssl-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L32)
- [should set valid proxy-ssl-secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L39)
- [should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L66)
- [should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L96)
- [should set valid proxy-ssl-secret, proxy-ssl-protocols](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L124)
- [proxy-ssl-location-only flag should change the nginx config server part](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L152)
### [permanent-redirect permanent-redirect-code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L30)
- [should respond with a standard redirect code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L33)
- [should respond with a custom redirect code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L61)
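
A minimal sketch of the redirect annotations; the target URL and code are placeholders (301 is used when no code is set):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: "https://www.example.com"
    nginx.ingress.kubernetes.io/permanent-redirect-code: "308"
```
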
### [rewrite-target use-regex enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L32)
- [should write rewrite logs](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L39)
- [should use correct longest path match](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L68)
- [should use ~* location modifier if regex annotation is present](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L113)
- [should fail to use longest match for documented warning](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L160)
- [should allow for custom rewrite parameters](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L192)
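
The rewrite tests pair `rewrite-target` with `use-regex` capture groups. A sketch under assumed names (host, service, and path are placeholders; `$2` refers to the second capture group in the path):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: rewrite.example.com
      http:
        paths:
          - path: /something(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: http-svc
                port:
                  number: 80
```
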
### [satisfy](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L33)
- [should configure satisfy directive correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L40)
- [should allow multiple auth with satisfy any](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L82)
### [server-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serversnippet.go#L28)
### [service-upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L32)
- [should use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L41)
- [should use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L69)
- [should not use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L97)
### [configuration-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L28)
- [set snippet more_set_headers in all locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L34)
- [drops snippet more_set_header in all locations if disabled by admin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L66)
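
A hypothetical `configuration-snippet` fragment; as the second test above notes, snippet annotations only take effect when the administrator has not disabled them:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
```
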
### [ssl-ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L28)
- [should change ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L35)
- [should keep ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L58)
### [stream-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L34)
- [should add value of stream-snippet to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L41)
- [should add stream-snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L88)
### [upstream-hash-by-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L79)
- [should connect to the same pod](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L86)
- [should connect to the same subset of pods](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L95)
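
An illustrative consistent-hashing fragment for the tests above; the hash key and subset size are placeholders:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset: "true"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset-size: "3"
```
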
### [upstream-vhost](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamvhost.go#L27)
- [set host to upstreamvhost.bar.com](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamvhost.go#L34)
### [x-forwarded-prefix](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L28)
- [should set the X-Forwarded-Prefix to the annotation value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L35)
- [should not add X-Forwarded-Prefix if the annotation value is empty](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L57)
### [[CGroups] cgroups](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L32)
- [detects cgroups version v1](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L40)
- [detect cgroups version v2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L83)
### [Debug CLI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L29)
- [should list the backend servers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L37)
- [should get information for a specific backend server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L56)
- [should produce valid JSON for /dbg general](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L85)
### [[Default Backend] custom service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/custom_default_backend.go#L33)
- [uses custom default backend that returns 200 as status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/custom_default_backend.go#L36)
### [[Default Backend]](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L30)
- [should return 404 sending requests when only a default backend is running](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L33)
- [enables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L88)
- [disables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L105)
### [[Default Backend] SSL](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/ssl.go#L26)
- [should return a self generated SSL certificate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/ssl.go#L29)
### [[Default Backend] change default settings](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/with_hosts.go#L30)
- [should apply the annotation to the default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/with_hosts.go#L38)
### [[Disable Leader] Routing works when leader election was disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/disableleaderelection/disable_leader.go#L28)
- [should create multiple ingress routings rules when leader election has disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/disableleaderelection/disable_leader.go#L35)
### [[Endpointslices] long service name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/longname.go#L29)
- [should return 200 when service name has max allowed number of characters 63](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/longname.go#L38)
### [[TopologyHints] topology aware routing](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/topology.go#L34)
- [should return 200 when service has topology hints](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/topology.go#L42)
### [[Shutdown] Grace period shutdown](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/grace_period.go#L32)
- [/healthz should return status code 500 during shutdown grace period](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/grace_period.go#L35)
### [[Shutdown] ingress controller](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/shutdown.go#L30)
- [should shutdown in less than 60 seconds without pending connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/shutdown.go#L40)
### [[Shutdown] Graceful shutdown with pending request](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/slow_requests.go#L25)
- [should let slow requests finish before shutting down](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/slow_requests.go#L33)
### [[Ingress] DeepInspection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/deep_inspection.go#L27)
- [should drop whole ingress if one path matches invalid regex](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/deep_inspection.go#L34)
### [single ingress - multiple hosts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/multiple_rules.go#L30)
- [should set the correct $service_name NGINX variable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/multiple_rules.go#L38)
### [[Ingress] [PathType] exact](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_exact.go#L30)
- [should choose exact location for /exact](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_exact.go#L37)
### [[Ingress] [PathType] mix Exact and Prefix paths](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_mixed.go#L30)
- [should choose the correct location](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_mixed.go#L39)
### [[Ingress] [PathType] prefix checks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L28)
- [should return 404 when prefix /aaa does not match request /aaaccc](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L35)
- [should test prefix path using simple regex pattern for /id/{int}](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L72)
- [should test prefix path using regex pattern for /id/{int} ignoring non-digits characters at end of string](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L113)
- [should test prefix path using fixed path size regex pattern /id/{int}{3}](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L142)
- [should correctly route multi-segment path patterns](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L177)
### [[Ingress] definition without host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L31)
- [should set ingress details variables for ingresses without a host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L34)
- [should set ingress details variables for ingresses with host without IngressRuleValue, only Backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L55)
### [[Memory Leak] Dynamic Certificates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/leaks/lua_ssl.go#L35)
- [should not leak memory from ingress SSL certificates or configuration updates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/leaks/lua_ssl.go#L42)
### [[Load Balancer] load-balance](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/configmap.go#L30)
- [should apply the configmap load-balance setting](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/configmap.go#L37)
### [[Load Balancer] EWMA](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/ewma.go#L31)
- [does not fail requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/ewma.go#L43)
### [[Load Balancer] round-robin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/round_robin.go#L31)
- [should evenly distribute requests with round-robin (default algorithm)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/round_robin.go#L39)
### [[Lua] dynamic certificates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L37)
- [picks up the certificate when we add TLS spec to existing ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L45)
- [picks up the previously missing secret for a given ingress without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L70)
- [supports requests with domain with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L145)
- [picks up the updated certificate without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L149)
- [falls back to using default certificate when secret gets deleted without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L185)
- [picks up a non-certificate only change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L218)
- [removes HTTPS configuration when we delete TLS spec](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L233)
### [[Lua] dynamic configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L41)
- [configures balancer Lua middleware correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L49)
- [handles endpoints only changes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L56)
- [handles endpoints only changes (down scaling of replicas)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L81)
- [handles endpoints only changes consistently (down scaling of replicas vs. empty service)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L119)
- [handles an annotation change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L165)
### [[metrics] exported prometheus metrics](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L36)
- [exclude socket request metrics are absent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L51)
- [exclude socket request metrics are present](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L73)
- [request metrics per undefined host are present when flag is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L95)
- [request metrics per undefined host are not present when flag is not set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L128)
### [nginx-configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L99)
- [start nginx with default configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L102)
- [fails when using alias directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L114)
- [fails when using root directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L121)
### [[Security] request smuggling](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/security/request_smuggling.go#L32)
- [should not return body content from error_page](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/security/request_smuggling.go#L39)
### [[Service] backend status code 503](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L34)
- [should return 503 when backend service does not exist](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L37)
- [should return 503 when all backend service endpoints are unavailable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L55)
### [[Service] Type ExternalName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L38)
- [works with external name set to incomplete fqdn](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L41)
- [should return 200 for service type=ExternalName without a port defined](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L78)
- [should return 200 for service type=ExternalName with a port defined](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L118)
- [should return status 502 for service type=ExternalName with an invalid host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L148)
- [should return 200 for service type=ExternalName using a port name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L184)
- [should return 200 for service type=ExternalName using FQDN with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L225)
- [should update the external name after a service update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L261)
- [should sync ingress on external name service addition/deletion](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L344)
### [[Service] Nil Service Backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_nil_backend.go#L31)
- [should return 404 when backend service is nil](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_nil_backend.go#L38)
### [access-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L27)
- [use the default configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L31)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L41)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L52)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L64)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L76)
### [aio-write](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L27)
- [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L30)
- [should be enabled when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L37)
- [should be disabled when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L46)
### [Bad annotation values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L29)
- [[BAD_ANNOTATIONS] should drop an ingress if there is an invalid character in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L36)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a forbidden word in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L68)
- [[BAD_ANNOTATIONS] should allow an ingress if there is a default blocklist config in place](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L105)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a custom blocklist config in place and allow others to pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L138)
### [brotli](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/brotli.go#L30)
- [should only compress responses that meet the `brotli-min-length` condition](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/brotli.go#L38)
### [Configmap change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/configmap_change.go#L29)
- [should reload after an update in the configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/configmap_change.go#L36)
### [add-headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L30)
- [Add a custom header](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L40)
- [Add multiple custom headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L65)
### [[SSL] [Flag] default-ssl-certificate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L35)
- [uses default ssl certificate for catch-all ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L66)
- [uses default ssl certificate for host based ingress when configured certificate does not match host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L82)
### [[Flag] disable-catch-all](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L33)
- [should ignore catch all Ingress with backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L50)
- [should ignore catch all Ingress with backend and rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L69)
- [should delete Ingress updated to catch-all](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L81)
- [should allow Ingress with rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L123)
### [[Flag] disable-service-external-name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_service_external_name.go#L35)
- [should ignore services of external-name type](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_service_external_name.go#L55)
### [[Flag] disable-sync-events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L32)
- [should create sync events (default)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L35)
- [should create sync events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L55)
- [should not create sync events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L83)
### [enable-real-ip](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L30)
- [trusts X-Forwarded-For header only when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L40)
- [should not trust X-Forwarded-For header when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L79)
### [use-forwarded-headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L31)
- [should trust X-Forwarded headers when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L41)
- [should not trust X-Forwarded headers when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L93)
### [Geoip2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L36)
- [should include geoip2 line in config when enabled and db file exists](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L45)
- [should only allow requests from specific countries](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L69)
- [should up and running nginx controller using autoreload flag](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L122)
### [[Security] block-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L28)
- [should block CIDRs defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L38)
- [should block User-Agents defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L55)
- [should block Referers defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L88)
### [[Security] global-auth-url](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L39)
- [should return status code 401 when request any protected service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L91)
- [should return status code 200 when request whitelisted (via no-auth-locations) service and 401 when request protected service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L107)
- [should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L130)
- [should still return status code 200 after auth backend is deleted using cache](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L158)
- [user retains cookie by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L322)
- [user does not retain cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L333)
- [user with global-auth-always-set-cookie key in configmap retains cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L344)
### [global-options](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_options.go#L28)
- [should have worker_rlimit_nofile option](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_options.go#L31)
- [should have worker_rlimit_nofile option and be independent on amount of worker processes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_options.go#L37)
### [GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/grpc.go#L39)
- [should set the correct GRPC Buffer Size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/grpc.go#L42)
### [gzip](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L30)
- [should be disabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L40)
- [should be enabled with default settings](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L56)
- [should set gzip_comp_level to 4](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L82)
- [should set gzip_disable to msie6](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L102)
- [should set gzip_min_length to 100](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L132)
- [should set gzip_types to text/html](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L164)
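
The gzip tests are driven by controller ConfigMap keys rather than annotations. A sketch of such a ConfigMap (its name and namespace depend on how the controller was installed; these are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-gzip: "true"
  gzip-level: "4"
  gzip-min-length: "100"
  gzip-types: "text/html"
```
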
### [hash size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L27)
- [should set server_names_hash_bucket_size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L39)
- [should set server_names_hash_max_size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L47)
- [should set proxy-headers-hash-bucket-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L57)
- [should set proxy-headers-hash-max-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L65)
- [should set variables-hash-bucket-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L75)
- [should set variables-hash-max-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L83)
- [should set vmap-hash-bucket-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L93)
### [[Flag] ingress-class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L41)
- [should ignore Ingress with a different class annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L70)
- [should ignore Ingress with different controller class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L106)
- [should accept both Ingresses with default IngressClassName and IngressClass annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L134)
- [should ignore Ingress without IngressClass configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L166)
- [should delete Ingress when class is removed](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L194)
- [should serve Ingress when class is added](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L259)
- [should serve Ingress when class is updated between annotation and ingressClassName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L325)
- [should ignore Ingress with no class and accept the correctly configured Ingresses](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L414)
- [should watch Ingress with no class and ignore ingress with a different class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L482)
- [should watch Ingress that uses the class name even if spec is different](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L538)
- [should watch Ingress with correct annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L628)
- [should ignore Ingress with only IngressClassName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress_class.go#L648)
### [keep-alive keep-alive-requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L28)
- [should set keepalive_timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L40)
- [should set keepalive_requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L48)
- [should set keepalive connection to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L58)
- [should set keep alive connection timeout to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L68)
- [should set keepalive time to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L78)
- [should set the request count to upstream server through one keep alive connection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L88)
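
A hedged sketch of the ConfigMap `data` keys the keepalive tests exercise; the values are placeholders:

```yaml
data:
  keep-alive: "75"
  keep-alive-requests: "100"
  upstream-keepalive-connections: "32"
  upstream-keepalive-timeout: "60"
  upstream-keepalive-requests: "1000"
  upstream-keepalive-time: "1h"
```
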
### [Configmap - limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/limit_rate.go#L28)
- [Check limit-rate config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/limit_rate.go#L36)
### [[Flag] custom HTTP and HTTPS ports](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen_nondefault_ports.go#L30)
- [should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen_nondefault_ports.go#L45)
- [should set X-Forwarded-Port header to 443](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen_nondefault_ports.go#L65)
- [should set the X-Forwarded-Port header to 443](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen_nondefault_ports.go#L93)
### [log-format-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L28)
- [should not configure log-format escape by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L39)
- [should enable the log-format-escape-json](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L46)
- [should disable the log-format-escape-json](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L54)
- [should enable the log-format-escape-none](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L62)
- [should disable the log-format-escape-none](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L70)
- [log-format-escape-json enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L80)
- [log-format default escape](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L103)
- [log-format-escape-none enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L126)
### [[Lua] lua-shared-dicts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/lua_shared_dicts.go#L26)
- [configures lua shared dicts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/lua_shared_dicts.go#L29)
### [main-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/main_snippet.go#L27)
- [should add value of main-snippet setting to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/main_snippet.go#L31)
### [[Security] modsecurity-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/modsecurity/modsecurity_snippet.go#L27)
- [should add value of modsecurity-snippet setting to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/modsecurity/modsecurity_snippet.go#L30)
### [enable-multi-accept](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi_accept.go#L27)
- [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi_accept.go#L31)
- [should be enabled when set to true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi_accept.go#L39)
- [should be disabled when set to false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi_accept.go#L49)
### [[Flag] watch namespace selector](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/namespace_selector.go#L30)
- [should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/namespace_selector.go#L62)
### [[Security] no-auth-locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no_auth_locations.go#L33)
- [should return status code 401 when accessing '/' unauthentication](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no_auth_locations.go#L54)
- [should return status code 200 when accessing '/' authentication](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no_auth_locations.go#L68)
- [should return status code 200 when accessing '/noauth' unauthenticated](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no_auth_locations.go#L82)
### [Add no tls redirect locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no_tls_redirect_locations.go#L27)
- [Check no tls redirect locations config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no_tls_redirect_locations.go#L30)
### [OCSP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ocsp/ocsp.go#L43)
- [should enable OCSP and contain stapling information in the connection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ocsp/ocsp.go#L50)
### [Configure Opentelemetry](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L39)
- [should not exists opentelemetry directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L49)
- [should exists opentelemetry directive when is enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L62)
- [should include opentelemetry_trust_incoming_spans on directive when enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L76)
- [should not exists opentelemetry_operation_name directive when is empty](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L91)
- [should exists opentelemetry_operation_name directive when is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L106)
### [proxy-connect-timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_connect_timeout.go#L29)
- [should set valid proxy timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_connect_timeout.go#L37)
- [should not set invalid proxy timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_connect_timeout.go#L53)
### [Dynamic $proxy_host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_host.go#L28)
- [should exist a proxy_host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_host.go#L36)
- [should exist a proxy_host using the upstream-vhost annotation value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_host.go#L60)
### [proxy-next-upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_next_upstream.go#L28)
- [should build proxy next upstream using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_next_upstream.go#L36)
### [use-proxy-protocol](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_protocol.go#L38)
- [should respect port passed by the PROXY Protocol](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_protocol.go#L48)
- [should respect proto passed by the PROXY Protocol server port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_protocol.go#L85)
- [should enable PROXY Protocol for HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_protocol.go#L121)
- [should enable PROXY Protocol for TCP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_protocol.go#L164)
### [proxy-read-timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_read_timeout.go#L29)
- [should set valid proxy read timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_read_timeout.go#L37)
- [should not set invalid proxy read timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_read_timeout.go#L53)
### [proxy-send-timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_send_timeout.go#L29)
- [should set valid proxy send timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_send_timeout.go#L37)
- [should not set invalid proxy send timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy_send_timeout.go#L53)
### [reuse-port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L27)
- [reuse port should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L38)
- [reuse port should be disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L44)
- [reuse port should be enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L52)
### [configmap server-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server_snippet.go#L28)
- [should add value of server-snippet setting to all ingress config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server_snippet.go#L35)
- [should add global server-snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server_snippet.go#L100)
### [server-tokens](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server_tokens.go#L29)
- [should not exists Server header in the response](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server_tokens.go#L38)
- [should exists Server header in the response when is enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server_tokens.go#L50)
### [ssl-ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl_ciphers.go#L28)
- [Add ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl_ciphers.go#L31)
### [[Flag] enable-ssl-passthrough](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl_passthrough.go#L36)
### [With enable-ssl-passthrough enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl_passthrough.go#L55)
- [should enable ssl-passthrough-proxy-port on a different port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl_passthrough.go#L56)
- [should pass unknown traffic to default backend and handle known traffic](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl_passthrough.go#L78)
### [configmap stream-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/stream_snippet.go#L35)
- [should add value of stream-snippet via config map to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/stream_snippet.go#L42)
### [[SSL] TLS protocols, ciphers and headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L32)
- [setting cipher suite](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L66)
- [setting max-age parameter](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L110)
- [setting includeSubDomains parameter](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L127)
- [setting preload parameter](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L147)
- [overriding what's set from the upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L168)
- [should not use ports during the HTTP to HTTPS redirection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L190)
- [should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L208)
### [annotation validations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/validations/validations.go#L30)
- [should allow ingress based on their risk on webhooks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/validations/validations.go#L33)
- [should allow ingress based on their risk on webhooks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/validations/validations.go#L68)
### [[SSL] redirect to HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/http_redirect.go#L29)
- [should redirect from HTTP to HTTPS when secret is missing](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/http_redirect.go#L36)
### [[SSL] secret update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/secret_update.go#L33)
- [should not appear references to secret updates not used in ingress rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/secret_update.go#L40)
- [should return the fake SSL certificate if the secret is invalid](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/secret_update.go#L83)
### [[Status] status update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/status/update.go#L38)
- [should update status field after client-go reconnection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/status/update.go#L43)
### [[TCP] tcp-services](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L38)
- [should expose a TCP service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L46)
- [should expose an ExternalName TCP service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L80)
- [should reload after an update in the configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L169)
go L83 should set vmap hash bucket size https github com kubernetes ingress nginx tree main test e2e settings hash size go L93 Flag ingress class https github com kubernetes ingress nginx tree main test e2e settings ingress class go L41 should ignore Ingress with a different class annotation https github com kubernetes ingress nginx tree main test e2e settings ingress class go L70 should ignore Ingress with different controller class https github com kubernetes ingress nginx tree main test e2e settings ingress class go L106 should accept both Ingresses with default IngressClassName and IngressClass annotation https github com kubernetes ingress nginx tree main test e2e settings ingress class go L134 should ignore Ingress without IngressClass configuration https github com kubernetes ingress nginx tree main test e2e settings ingress class go L166 should delete Ingress when class is removed https github com kubernetes ingress nginx tree main test e2e settings ingress class go L194 should serve Ingress when class is added https github com kubernetes ingress nginx tree main test e2e settings ingress class go L259 should serve Ingress when class is updated between annotation and ingressClassName https github com kubernetes ingress nginx tree main test e2e settings ingress class go L325 should ignore Ingress with no class and accept the correctly configured Ingresses https github com kubernetes ingress nginx tree main test e2e settings ingress class go L414 should watch Ingress with no class and ignore ingress with a different class https github com kubernetes ingress nginx tree main test e2e settings ingress class go L482 should watch Ingress that uses the class name even if spec is different https github com kubernetes ingress nginx tree main test e2e settings ingress class go L538 should watch Ingress with correct annotation https github com kubernetes ingress nginx tree main test e2e settings ingress class go L628 should ignore Ingress with only IngressClassName https github com kubernetes ingress nginx tree main test e2e settings ingress class go L648 keep alive keep alive requests https github com kubernetes ingress nginx tree main test e2e settings keep alive go L28 should set keepalive timeout https github com kubernetes ingress nginx tree main test e2e settings keep alive go L40 should set keepalive requests https github com kubernetes ingress nginx tree main test e2e settings keep alive go L48 should set keepalive connection to upstream server https github com kubernetes ingress nginx tree main test e2e settings keep alive go L58 should set keep alive connection timeout to upstream server https github com kubernetes ingress nginx tree main test e2e settings keep alive go L68 should set keepalive time to upstream server https github com kubernetes ingress nginx tree main test e2e settings keep alive go L78 should set the request count to upstream server through one keep alive connection https github com kubernetes ingress nginx tree main test e2e settings keep alive go L88 Configmap limit rate https github com kubernetes ingress nginx tree main test e2e settings limit rate go L28 Check limit rate config https github com kubernetes ingress nginx tree main test e2e settings limit rate go L36 Flag custom HTTP and HTTPS ports https github com kubernetes ingress nginx tree main test e2e settings listen nondefault ports go L30 should set X Forwarded Port headers accordingly when listening on a non default HTTP port https github com kubernetes ingress nginx tree main test e2e settings listen 
nondefault ports go L45 should set X Forwarded Port header to 443 https github com kubernetes ingress nginx tree main test e2e settings listen nondefault ports go L65 should set the X Forwarded Port header to 443 https github com kubernetes ingress nginx tree main test e2e settings listen nondefault ports go L93 log format https github com kubernetes ingress nginx tree main test e2e settings log format go L28 should not configure log format escape by default https github com kubernetes ingress nginx tree main test e2e settings log format go L39 should enable the log format escape json https github com kubernetes ingress nginx tree main test e2e settings log format go L46 should disable the log format escape json https github com kubernetes ingress nginx tree main test e2e settings log format go L54 should enable the log format escape none https github com kubernetes ingress nginx tree main test e2e settings log format go L62 should disable the log format escape none https github com kubernetes ingress nginx tree main test e2e settings log format go L70 log format escape json enabled https github com kubernetes ingress nginx tree main test e2e settings log format go L80 log format default escape https github com kubernetes ingress nginx tree main test e2e settings log format go L103 log format escape none enabled https github com kubernetes ingress nginx tree main test e2e settings log format go L126 Lua lua shared dicts https github com kubernetes ingress nginx tree main test e2e settings lua shared dicts go L26 configures lua shared dicts https github com kubernetes ingress nginx tree main test e2e settings lua shared dicts go L29 main snippet https github com kubernetes ingress nginx tree main test e2e settings main snippet go L27 should add value of main snippet setting to nginx config https github com kubernetes ingress nginx tree main test e2e settings main snippet go L31 Security modsecurity snippet https github com kubernetes ingress nginx tree main test e2e settings modsecurity modsecurity snippet go L27 should add value of modsecurity snippet setting to nginx config https github com kubernetes ingress nginx tree main test e2e settings modsecurity modsecurity snippet go L30 enable multi accept https github com kubernetes ingress nginx tree main test e2e settings multi accept go L27 should be enabled by default https github com kubernetes ingress nginx tree main test e2e settings multi accept go L31 should be enabled when set to true https github com kubernetes ingress nginx tree main test e2e settings multi accept go L39 should be disabled when set to false https github com kubernetes ingress nginx tree main test e2e settings multi accept go L49 Flag watch namespace selector https github com kubernetes ingress nginx tree main test e2e settings namespace selector go L30 should ignore Ingress of namespace without label foo bar and accept those of namespace with label foo bar https github com kubernetes ingress nginx tree main test e2e settings namespace selector go L62 Security no auth locations https github com kubernetes ingress nginx tree main test e2e settings no auth locations go L33 should return status code 401 when accessing unauthentication https github com kubernetes ingress nginx tree main test e2e settings no auth locations go L54 should return status code 200 when accessing authentication https github com kubernetes ingress nginx tree main test e2e settings no auth locations go L68 should return status code 200 when accessing noauth unauthenticated https github com 
kubernetes ingress nginx tree main test e2e settings no auth locations go L82 Add no tls redirect locations https github com kubernetes ingress nginx tree main test e2e settings no tls redirect locations go L27 Check no tls redirect locations config https github com kubernetes ingress nginx tree main test e2e settings no tls redirect locations go L30 OCSP https github com kubernetes ingress nginx tree main test e2e settings ocsp ocsp go L43 should enable OCSP and contain stapling information in the connection https github com kubernetes ingress nginx tree main test e2e settings ocsp ocsp go L50 Configure Opentelemetry https github com kubernetes ingress nginx tree main test e2e settings opentelemetry go L39 should not exists opentelemetry directive https github com kubernetes ingress nginx tree main test e2e settings opentelemetry go L49 should exists opentelemetry directive when is enabled https github com kubernetes ingress nginx tree main test e2e settings opentelemetry go L62 should include opentelemetry trust incoming spans on directive when enabled https github com kubernetes ingress nginx tree main test e2e settings opentelemetry go L76 should not exists opentelemetry operation name directive when is empty https github com kubernetes ingress nginx tree main test e2e settings opentelemetry go L91 should exists opentelemetry operation name directive when is configured https github com kubernetes ingress nginx tree main test e2e settings opentelemetry go L106 proxy connect timeout https github com kubernetes ingress nginx tree main test e2e settings proxy connect timeout go L29 should set valid proxy timeouts using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy connect timeout go L37 should not set invalid proxy timeouts using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy connect timeout go L53 Dynamic proxy host https github com kubernetes ingress nginx tree main test e2e settings proxy host go L28 should exist a proxy host https github com kubernetes ingress nginx tree main test e2e settings proxy host go L36 should exist a proxy host using the upstream vhost annotation value https github com kubernetes ingress nginx tree main test e2e settings proxy host go L60 proxy next upstream https github com kubernetes ingress nginx tree main test e2e settings proxy next upstream go L28 should build proxy next upstream using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy next upstream go L36 use proxy protocol https github com kubernetes ingress nginx tree main test e2e settings proxy protocol go L38 should respect port passed by the PROXY Protocol https github com kubernetes ingress nginx tree main test e2e settings proxy protocol go L48 should respect proto passed by the PROXY Protocol server port https github com kubernetes ingress nginx tree main test e2e settings proxy protocol go L85 should enable PROXY Protocol for HTTPS https github com kubernetes ingress nginx tree main test e2e settings proxy protocol go L121 should enable PROXY Protocol for TCP https github com kubernetes ingress nginx tree main test e2e settings proxy protocol go L164 proxy read timeout https github com kubernetes ingress nginx tree main test e2e settings proxy read timeout go L29 should set valid proxy read timeouts using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy read timeout go L37 should not set invalid proxy read timeouts 
using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy read timeout go L53 proxy send timeout https github com kubernetes ingress nginx tree main test e2e settings proxy send timeout go L29 should set valid proxy send timeouts using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy send timeout go L37 should not set invalid proxy send timeouts using configmap values https github com kubernetes ingress nginx tree main test e2e settings proxy send timeout go L53 reuse port https github com kubernetes ingress nginx tree main test e2e settings reuse port go L27 reuse port should be enabled by default https github com kubernetes ingress nginx tree main test e2e settings reuse port go L38 reuse port should be disabled https github com kubernetes ingress nginx tree main test e2e settings reuse port go L44 reuse port should be enabled https github com kubernetes ingress nginx tree main test e2e settings reuse port go L52 configmap server snippet https github com kubernetes ingress nginx tree main test e2e settings server snippet go L28 should add value of server snippet setting to all ingress config https github com kubernetes ingress nginx tree main test e2e settings server snippet go L35 should add global server snippet and drop annotations per admin config https github com kubernetes ingress nginx tree main test e2e settings server snippet go L100 server tokens https github com kubernetes ingress nginx tree main test e2e settings server tokens go L29 should not exists Server header in the response https github com kubernetes ingress nginx tree main test e2e settings server tokens go L38 should exists Server header in the response when is enabled https github com kubernetes ingress nginx tree main test e2e settings server tokens go L50 ssl ciphers https github com kubernetes ingress nginx tree main test e2e settings ssl ciphers go L28 Add ssl ciphers https github com kubernetes ingress nginx tree main test e2e settings ssl ciphers go L31 Flag enable ssl passthrough https github com kubernetes ingress nginx tree main test e2e settings ssl passthrough go L36 With enable ssl passthrough enabled https github com kubernetes ingress nginx tree main test e2e settings ssl passthrough go L55 should enable ssl passthrough proxy port on a different port https github com kubernetes ingress nginx tree main test e2e settings ssl passthrough go L56 should pass unknown traffic to default backend and handle known traffic https github com kubernetes ingress nginx tree main test e2e settings ssl passthrough go L78 configmap stream snippet https github com kubernetes ingress nginx tree main test e2e settings stream snippet go L35 should add value of stream snippet via config map to nginx config https github com kubernetes ingress nginx tree main test e2e settings stream snippet go L42 SSL TLS protocols ciphers and headers https github com kubernetes ingress nginx tree main test e2e settings tls go L32 setting cipher suite https github com kubernetes ingress nginx tree main test e2e settings tls go L66 setting max age parameter https github com kubernetes ingress nginx tree main test e2e settings tls go L110 setting includeSubDomains parameter https github com kubernetes ingress nginx tree main test e2e settings tls go L127 setting preload parameter https github com kubernetes ingress nginx tree main test e2e settings tls go L147 overriding what s set from the upstream https github com kubernetes ingress nginx tree main test e2e 
settings tls go L168 should not use ports during the HTTP to HTTPS redirection https github com kubernetes ingress nginx tree main test e2e settings tls go L190 should not use ports or X Forwarded Host during the HTTP to HTTPS redirection https github com kubernetes ingress nginx tree main test e2e settings tls go L208 annotation validations https github com kubernetes ingress nginx tree main test e2e settings validations validations go L30 should allow ingress based on their risk on webhooks https github com kubernetes ingress nginx tree main test e2e settings validations validations go L33 should allow ingress based on their risk on webhooks https github com kubernetes ingress nginx tree main test e2e settings validations validations go L68 SSL redirect to HTTPS https github com kubernetes ingress nginx tree main test e2e ssl http redirect go L29 should redirect from HTTP to HTTPS when secret is missing https github com kubernetes ingress nginx tree main test e2e ssl http redirect go L36 SSL secret update https github com kubernetes ingress nginx tree main test e2e ssl secret update go L33 should not appear references to secret updates not used in ingress rules https github com kubernetes ingress nginx tree main test e2e ssl secret update go L40 should return the fake SSL certificate if the secret is invalid https github com kubernetes ingress nginx tree main test e2e ssl secret update go L83 Status status update https github com kubernetes ingress nginx tree main test e2e status update go L38 should update status field after client go reconnection https github com kubernetes ingress nginx tree main test e2e status update go L43 TCP tcp services https github com kubernetes ingress nginx tree main test e2e tcpudp tcp go L38 should expose a TCP service https github com kubernetes ingress nginx tree main test e2e tcpudp tcp go L46 should expose an ExternalName TCP service https github com kubernetes ingress nginx tree main test e2e tcpudp tcp go L80 should reload after an update in the configuration https github com kubernetes ingress nginx tree main test e2e tcpudp tcp go L169 |
# FAQ
## Multi-tenant Kubernetes
Do not use in multi-tenant Kubernetes production installations. This project assumes that users that can create Ingress objects are administrators of the cluster.
The Ingress NGINX control plane has global and per-Ingress configuration options that, if enabled, make it insecure in a multi-tenant environment.
For example, enabling snippets (a global configuration option) allows any Ingress object to run arbitrary Lua code that could affect the security of all Ingress objects served by that controller.
For this reason, the default for allowing snippets was changed to `false` in https://github.com/kubernetes/ingress-nginx/pull/10393.
## Multiple controllers in one cluster
Question - How can I easily install multiple instances of the ingress-nginx controller in the same cluster?
You can install them in different namespaces.
- Create a new namespace
```
kubectl create namespace ingress-nginx-2
```
- Use Helm to install the additional instance of the ingress controller
- Ensure you have Helm working (refer to the [Helm documentation](https://helm.sh/docs/))
- This assumes you have already added the ingress-nginx controller's Helm repo to your Helm config.
  If you have not, add the repo with:
```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
```
- Make sure the Helm repo data is up to date:
```
helm repo update
```
- Now, install an additional instance of the ingress-nginx controller like this:
```
helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
--namespace ingress-nginx-2 \
--set controller.ingressClassResource.name=nginx-two \
--set controller.ingressClass=nginx-two \
--set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassByName=true
```
If you need to install yet another instance, repeat the procedure: create a new namespace and
change the values such as names and namespaces (for example from "-2" to "-3") to whatever meets your needs.
Note that `controller.ingressClassResource.name` and `controller.ingressClass` have to be set correctly.
The first creates the IngressClass object, while the second configures the deployment of the actual ingress controller pod to watch that class. An example Ingress that targets this second instance is shown below.
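As a sketch of how an Ingress would target this second instance, reference the new class by name; the host and Service below are placeholders:
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-on-second-controller
spec:
  ingressClassName: nginx-two        # matches controller.ingressClassResource.name above
  rules:
    - host: demo.example.com         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # placeholder Service
                port:
                  number: 80
```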
### I can't use multiple namespaces, what should I do?
If you need to install all instances in the same namespace, then you need to specify a different **election id**, like this:
```
helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
--namespace kube-system \
--set controller.electionID=nginx-two-leader \
--set controller.ingressClassResource.name=nginx-two \
--set controller.ingressClass=nginx-two \
--set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassByName=true
```
## Retaining the Client IP Address
Question - How do I obtain the real client IP address?
The go-to solution for retaining the real client IP address is to enable PROXY protocol.
PROXY protocol has to be enabled on both the Ingress NGINX controller and the L4 load balancer in front of it.
By default, the real client IP address is lost when traffic is forwarded over the network; enabling PROXY protocol preserves the connection details so the client address is retained.
Enabling proxy-protocol on the controller is documented [here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol).
For enabling proxy-protocol on the load balancer, refer to the documentation of your infrastructure provider, because that is where the load balancer is provisioned.
More information is available [here](https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address) and in the [proxy-protocol notes](https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol).
### Client IP address on a single-node cluster
Single-node clusters are typically created for dev and test use with tools like kind or minikube. A trick to simulate a realistic network with these clusters is to install MetalLB and configure the IP address of the kind container (or the minikube VM/container) as both the start and the end of MetalLB's address pool in L2 mode. The host IP then becomes a real client IP address for curl requests sent from the host.
After installing the ingress-nginx controller on a kind or minikube cluster with Helm, you can expose the real client IP with a simple change to the Service that the controller creates. The Service of type `LoadBalancer` has a field `service.spec.externalTrafficPolicy`; if you set it to `Local`, the real client IP address becomes visible to the controller, as shown in the patch example after the `kubectl explain` output below.
```
% kubectl explain service.spec.externalTrafficPolicy
KIND: Service
VERSION: v1
FIELD: externalTrafficPolicy <string>
DESCRIPTION:
externalTrafficPolicy describes how nodes distribute service traffic they
receive on one of the Service's "externally-facing" addresses (NodePorts,
ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will
configure the service in a way that assumes that external load balancers
will take care of balancing the service traffic between nodes, and so each
node will deliver traffic only to the node-local endpoints of the service,
without masquerading the client source IP. (Traffic mistakenly sent to a
node with no endpoints will be dropped.) The default value, "Cluster", uses
the standard behavior of routing to all endpoints evenly (possibly modified
by topology and other features). Note that traffic sent to an External IP or
LoadBalancer IP from within the cluster will always get "Cluster" semantics,
but clients sending to a NodePort from within the cluster may need to take
traffic policy into account when picking a node.
Possible enum values:
- `"Cluster"` routes traffic to all endpoints.
- `"Local"` preserves the source IP of the traffic by routing only to
endpoints on the same node as the traffic was received on (dropping the
traffic if there are no local endpoints).
```
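For example, on a kind or minikube cluster you could patch the controller's Service as follows; the namespace and Service name assume a default Helm install and may differ in your environment:
```
# Preserve the client source IP by routing only to node-local endpoints
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  --type merge \
  -p '{"spec": {"externalTrafficPolicy": "Local"}}'
```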
### Client IP address at L7
The solution is to read the real client IP address from the [`X-Forwarded-For` HTTP header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For).
Example: if the application pod behind the Ingress NGINX controller runs its own NGINX web server and reverse proxy, you can do the following to preserve the remote client IP.
- First, make sure the `X-Forwarded-For` header reaches the backend pod. This is done with an Ingress NGINX controller ConfigMap key, documented [here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers). A minimal ConfigMap sketch is shown after the `nginx.conf` example below.
- Next, edit the `nginx.conf` file inside your app pod to contain the directives shown below:
```
set_real_ip_from 0.0.0.0/0; # Trust all IPs (use your VPC CIDR block in production)
real_ip_header X-Forwarded-For; # Take the client address from the X-Forwarded-For header
real_ip_recursive on; # Skip trusted proxy addresses when resolving the client address
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" '
'host=$host x-forwarded-for=$http_x_forwarded_for';
access_log /var/log/nginx/access.log main;
```
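The ConfigMap change referenced in the first bullet could look like this minimal sketch; the ConfigMap name and namespace below assume a default Helm install and may differ in your setup:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # default name from the Helm chart; adjust to your install
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"    # trust incoming X-Forwarded-* headers and pass them to the backends
```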
## Kubernetes v1.22 Migration
If you are using Ingress objects in your cluster (running Kubernetes older than
version 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or
above, then please read [the migration guide here](./user-guide/k8s-122-migration.md).
## Validation Of **`path`**
- For improving security and also following desired standards on Kubernetes API
spec, the next release, scheduled for v1.8.0, will include a new & optional
feature of validating the value for the key `ingress.spec.rules.http.paths.path`.
- This behavior will be disabled by default on the 1.8.0 release and enabled by
default on the next breaking change release, set for 2.0.0.
- When "`ingress.spec.rules.http.pathType=Exact`" or "`pathType=Prefix`", this
validation will limit the characters accepted on the field "`ingress.spec.rules.http.paths.path`",
to "`alphanumeric characters`", and `"/," "_," "-."` Also, in this case,
the path should start with `"/."`
- When the ingress resource path contains other characters (like on rewrite
  configurations), the pathType value should be "`ImplementationSpecific`" (see the example after this list).
- API Spec on pathType is documented [here](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types)
- When this option is enabled, the validation will happen in the Admission
  Webhook. So if any new ingress object contains characters other than
  alphanumeric characters, `/`, `_` and `-` in the `path` field, but
  is not using the `pathType` value `ImplementationSpecific`, then the ingress
  object will be denied admission.
- The cluster admin should establish validation rules using mechanisms like
"`Open Policy Agent`", to validate that only authorized users can use
ImplementationSpecific pathType and that only the authorized characters can be
used. [The configmap value is here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#strict-validate-path-type)
- A complete example of an Open Policy Agent Gatekeeper rule is available [here](https://kubernetes.github.io/ingress-nginx/examples/openpolicyagent/)
- If you have any issues or concerns, please do one of the following:
- Open a GitHub issue
- Comment in our Dev Slack Channel
- Open a thread in our Google Group <[email protected]>
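For instance, a rewrite-style Ingress whose path contains capture groups falls outside the strict character set and therefore uses `pathType: ImplementationSpecific`; the host, Service, and rewrite target below are illustrative only:
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: rewrite.example.com        # illustrative host
      http:
        paths:
          - path: /app(/|$)(.*)        # contains characters outside the strict set
            pathType: ImplementationSpecific
            backend:
              service:
                name: app-service      # illustrative Service
                port:
                  number: 80
```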
## Why is chunking not working since controller v1.10?
- If your code is setting the HTTP header `"Transfer-Encoding: chunked"` and
the controller log messages show an error about duplicate header, it is
because of this change <http://hg.nginx.org/nginx/rev/2bf7792c262e>
- More details are available in this issue <https://github.com/kubernetes/ingress-nginx/issues/11162>
# TLS/HTTPS
## TLS Secrets
Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.
!!! warning
Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs ```W1012 09:15:45.920000 6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key```
You can generate a self-signed certificate and private key with:
```bash
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}" -addext "subjectAltName = DNS:${HOST}"
```
Then create the secret in the cluster via:
```bash
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
```
The resulting secret will be of type `kubernetes.io/tls`.
## Host names
Ensure that the relevant [ingress rules specify a matching hostname](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls).
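For example, an Ingress that terminates TLS for a single host might look like the sketch below; the host, Service name, and secret name are placeholders, and the `tls` host must match both the rule host and the certificate's subject/SAN:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com          # must match the rule host below and the certificate
      secretName: app-example-tls  # a kubernetes.io/tls secret in the same namespace
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service  # placeholder Service
                port:
                  number: 80
```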
## Default SSL Certificate
NGINX provides the option to configure a server as a catch-all with
[server_name](https://nginx.org/en/docs/http/server_names.html)
for requests that do not match any of the configured server names.
This configuration works out-of-the-box for HTTP traffic.
For HTTPS, a certificate is naturally required.
For this reason the Ingress controller provides the flag `--default-ssl-certificate`.
The secret referred to by this flag contains the default certificate to be used when
accessing the catch-all server.
If this flag is not provided NGINX will use a self-signed certificate.
For instance, if you have a TLS secret `foo-tls` in the `default` namespace,
add `--default-ssl-certificate=default/foo-tls` in the `nginx-controller` deployment.
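If you deploy the controller with the Helm chart, one way to pass this flag is through the chart's extra arguments; a sketch, assuming the `controller.extraArgs` values layout of recent chart versions (verify against your chart version):
```bash
# Adds --default-ssl-certificate=default/foo-tls to the controller container args
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.extraArgs.default-ssl-certificate=default/foo-tls
```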
If the `tls:` section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.
On the other hand, if the `tls:` section is set - even without specifying a `secretName` option - NGINX will force HTTPS redirect.
To force redirects for Ingresses that do not specify a TLS-block at all, take a look at `force-ssl-redirect` in [ConfigMap][ConfigMap].
## SSL Passthrough
The [`--enable-ssl-passthrough`](cli-arguments.md) flag enables the SSL Passthrough feature, which is disabled by
default. This is required to enable passthrough backends in Ingress objects.
!!! warning
This feature is implemented by intercepting **all traffic** on the configured HTTPS port (default: 443) and handing
it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
SSL Passthrough leverages [SNI][SNI] and reads the virtual domain from the TLS negotiation, which requires compatible
clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back
and forth between the backend and the client.
If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured
passthrough proxy port (default: 442), which proxies the request to the default backend.
!!! note
Unlike HTTP backends, traffic to Passthrough backends is sent to the *clusterIP* of the backing Service instead of
individual Endpoints.
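With the flag enabled on the controller, passthrough is requested per Ingress through the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation; a minimal sketch, with placeholder host and Service names, where the backend terminates TLS itself:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: passthrough-demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"   # TLS is terminated by the backend, not by NGINX
spec:
  ingressClassName: nginx
  rules:
    - host: secure.example.com       # SNI host used to route the TLS connection
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-backend # placeholder; must serve TLS itself
                port:
                  number: 443
```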
## HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified
through the use of a special response header. Once a supported browser receives
this header that browser will prevent any communications from being sent over
HTTP to the specified domain and will instead send all communications over HTTPS.
HSTS is enabled by default.
To disable this behavior use `hsts: "false"` in the configuration [ConfigMap][ConfigMap].
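A minimal sketch of that ConfigMap change; the ConfigMap name and namespace below are assumptions based on a default Helm install, so adjust them to your installation:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # adjust to your controller's ConfigMap
  namespace: ingress-nginx
data:
  hsts: "false"                    # turn off the Strict-Transport-Security response header
```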
## Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port
443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
This can be disabled globally using `ssl-redirect: "false"` in the NGINX [config map][ConfigMap],
or per-Ingress with the `nginx.ingress.kubernetes.io/ssl-redirect: "false"`
annotation in the particular resource.
!!! tip
When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a
redirect to HTTPS even when there is no TLS certificate available.
This can be achieved by using the `nginx.ingress.kubernetes.io/force-ssl-redirect: "true"`
annotation in the particular resource.
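A short sketch of both per-Ingress knobs mentioned above (hostname and Service name are placeholders; normally you would use only one of the two annotations):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirect-demo
  annotations:
    # Disable the automatic HTTPS redirect for this Ingress only:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # Or, when TLS is offloaded in front of the controller, force the redirect instead:
    # nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: redirect.example.com     # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # placeholder Service
                port:
                  number: 80
```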
## Automated Certificate Management with cert-manager
[cert-manager] automatically requests missing or expired certificates from a range of
[supported issuers][cert-manager-issuer-config] (including [Let's Encrypt]) by monitoring
ingress resources.
To set up cert-manager you should take a look at this [full example][full-cert-manager-example].
To enable it for an ingress resource you have to deploy cert-manager, configure a certificate
issuer, and update the manifest:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-demo
annotations:
cert-manager.io/issuer: "letsencrypt-staging" # Replace this with a production issuer once you've tested it
[..]
spec:
tls:
- hosts:
- ingress-demo.example.com
secretName: ingress-demo-tls
[...]
```
## Default TLS Version and Ciphers
To provide the most secure baseline configuration possible,
ingress-nginx defaults to using TLS 1.2 and 1.3 only, with a [secure set of TLS ciphers][ssl-ciphers].
### Legacy TLS
The default configuration, though secure, does not support some older browsers and operating systems.
For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing,
May 2018, [approximately 15% of Android devices](https://developer.android.com/about/dashboards/#Platform)
are not compatible with ingress-nginx's default configuration.
To change this default behavior, use a [ConfigMap][ConfigMap].
A sample ConfigMap fragment to allow these older clients to connect could look something like the following
(generated using the [Mozilla SSL Configuration Generator][mozilla-ssl-config-old]):
```
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
data:
ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
ssl-protocols: "TLSv1.2 TLSv1.3"
```
[Let's Encrypt]:https://letsencrypt.org
[ConfigMap]: ./nginx-configuration/configmap.md
[ssl-ciphers]: ./nginx-configuration/configmap.md#ssl-ciphers
[SNI]: https://en.wikipedia.org/wiki/Server_Name_Indication
[mozilla-ssl-config-old]: https://ssl-config.mozilla.org/#server=nginx&config=old
[cert-manager]: https://github.com/jetstack/cert-manager/
[full-cert-manager-example]:https://cert-manager.io/docs/tutorials/acme/nginx-ingress/
[cert-manager-issuer-config]:https://cert-manager.io/docs/configuration/
# FAQ - Migration to Kubernetes 1.22 and apiVersion `networking.k8s.io/v1`
If you are using Ingress objects in your cluster (running Kubernetes older than v1.22),
and you plan to upgrade to Kubernetes v1.22, this page is relevant to you.
- Please read this [official blog on deprecated Ingress API versions](https://kubernetes.io/blog/2021/07/26/update-with-ingress-nginx/)
- Please read this [official documentation on the IngressClass object](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class)
## What is an IngressClass and why is it important for users of ingress-nginx controller now?
IngressClass is a Kubernetes resource. See the description below.
It's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object.
From version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.
On clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve.
The `ingressClassName` field of an Ingress is the way to let the controller know about that.
```console
kubectl explain ingressclass
```
```
KIND: IngressClass
VERSION: networking.k8s.io/v1
DESCRIPTION:
IngressClass represents the class of the Ingress, referenced by the Ingress
Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be
used to indicate that an IngressClass should be considered default. When a
single IngressClass resource has this annotation set to true, new Ingress
resources without a class specified will be assigned this default class.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Spec is the desired state of the IngressClass. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status`
```
## What has caused this change in behavior?
There are 2 primary reasons.
### Reason 1
Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:
- `extensions/v1beta1`
- `networking.k8s.io/v1beta1`
You would get a message about deprecation, but the Ingress resource would get created.
From K8s version 1.22 onwards, you can **only** access the Ingress API via the stable, `networking.k8s.io/v1` API.
The reason is explained in the [official blog on deprecated ingress API versions](https://kubernetes.io/blog/2021/07/26/update-with-ingress-nginx/).
### Reason 2
If you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22,
there are several scenarios where your existing Ingress objects will not work how you expect.
Read this FAQ to check which scenario matches your use case.
## What is the `ingressClassName` field?
`ingressClassName` is a field in the spec of an Ingress object.
```shell
kubectl explain ingress.spec.ingressClassName
```
```console
KIND: Ingress
VERSION: networking.k8s.io/v1
FIELD: ingressClassName <string>
DESCRIPTION:
IngressClassName is the name of the IngressClass cluster resource. The
associated IngressClass defines which controller will implement the
resource. This replaces the deprecated `kubernetes.io/ingress.class`
annotation. For backwards compatibility, when that annotation is set, it
must be given precedence over this field. The controller may emit a warning
if the field and annotation have different values. Implementations of this
API should ignore Ingresses without a class specified. An IngressClass
resource may be marked as default, which can be used to set a default value
for this field. For more information, refer to the IngressClass
documentation.
```
The `.spec.ingressClassName` behavior has precedence over the deprecated `kubernetes.io/ingress.class` annotation.
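For illustration, a minimal Ingress using the field could look like the sketch below (the names `demo-ingress`, `demo.example.com` and `demo-service` are placeholders, not objects defined elsewhere in these docs):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                 # placeholder name
spec:
  ingressClassName: nginx            # selects the IngressClass named "nginx"
  rules:
    - host: demo.example.com         # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # placeholder backend Service
                port:
                  number: 80
```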
## I have only one ingress controller in my cluster. What should I do?
If a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster,
you should add the annotation `ingressclass.kubernetes.io/is-default-class` to your IngressClass,
so any new Ingress objects will have this one as default IngressClass.
When using Helm, you can enable this annotation by setting `.controller.ingressClassResource.default: true` in your Helm chart installation's values file.
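For reference, a hedged sketch of the corresponding values-file fragment (only the `default` key is taken from the text above; the surrounding structure is the chart's standard layout):
```yaml
# values.yaml (sketch)
controller:
  ingressClassResource:
    default: true   # marks the created IngressClass as the cluster default
```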
If you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:
- You can manually set the [`.spec.ingressClassName`](https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec) field in the manifest of your own Ingress resources.
- You can re-create them after setting the `ingressclass.kubernetes.io/is-default-class` annotation to `true` on the IngressClass
- Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag [--watch-ingress-without-class=true](#what-is-the-flag-watch-ingress-without-class).
When using Helm, you can configure your Helm chart installation's values file with `.controller.watchIngressWithoutClass: true`.
We recommend that you create the IngressClass as shown below:
```
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
name: nginx
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: k8s.io/ingress-nginx
```
and add the value `spec.ingressClassName=nginx` in your Ingress objects.
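If you prefer not to edit manifests by hand, a patch like the hedged sketch below can set the field on an existing Ingress (the Ingress name `my-ingress` and the `default` namespace are placeholders):
```console
kubectl patch ingress my-ingress -n default --type merge \
  -p '{"spec":{"ingressClassName":"nginx"}}'
```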
## I have many ingress objects in my cluster. What should I do?
If you have a lot of ingress objects without ingressClass configuration,
you can run the ingress controller with the flag `--watch-ingress-without-class=true`.
### What is the flag `--watch-ingress-without-class`?
It's a flag that is passed, as an argument, to the `nginx-ingress-controller` executable.
In the configuration, it looks like this:
```yaml
# ...
args:
- /nginx-ingress-controller
- --watch-ingress-without-class=true
- --controller-class=k8s.io/ingress-nginx
# ...
# ...
```
## I have more than one controller in my cluster, and I'm already using the annotation
No problem. This should still keep working, but we highly recommend you to test!
Even though `kubernetes.io/ingress.class` is deprecated, the ingress-nginx controller still understands that annotation.
If you want to follow good practice, you should consider migrating to use IngressClass and `.spec.ingressClassName`.
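As a minimal sketch of such a migration (the resource name is a placeholder), the change amounts to dropping the annotation and setting the field instead:
```yaml
# Before: class selected via the deprecated annotation
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
---
# After: class selected via the spec field
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
```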
## I have more than one controller running in my cluster, and I want to use the new API
In this scenario, you need to create multiple IngressClasses (see the example above).
Be aware that IngressClass works in a very specific way: you will need to change the `.spec.controller` value in your IngressClass and configure the controller to expect the exact same value.
Let's see an example, supposing that you have three IngressClasses:
- IngressClass `ingress-nginx-one`, with `.spec.controller` equal to `example.com/ingress-nginx1`
- IngressClass `ingress-nginx-two`, with `.spec.controller` equal to `example.com/ingress-nginx2`
- IngressClass `ingress-nginx-three`, with `.spec.controller` equal to `example.com/ingress-nginx1`
For private use, you can also use a controller name that doesn't contain a `/`, e.g. `ingress-nginx1`.
When deploying your ingress controllers, you will have to change the `--controller-class` field as follows:
- Ingress-Nginx A, configured to use controller class name `example.com/ingress-nginx1`
- Ingress-Nginx B, configured to use controller class name `example.com/ingress-nginx2`
When you create an Ingress object with its `ingressClassName` set to `ingress-nginx-two`,
only controllers looking for the `example.com/ingress-nginx2` controller class pay attention to the new object.
Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.
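To make this concrete, a hedged sketch of the pair involved is shown below (the class name `ingress-nginx-two` and the controller value `example.com/ingress-nginx2` come from the example above; the Ingress name, hostname and backend Service are placeholders):
```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ingress-nginx-two
spec:
  controller: example.com/ingress-nginx2   # must match --controller-class of Ingress-Nginx B
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                       # placeholder
spec:
  ingressClassName: ingress-nginx-two      # served by Ingress-Nginx B only
  rules:
    - host: demo.example.com               # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service         # placeholder
                port:
                  number: 80
```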
Bear in mind that if you start Ingress-Nginx B with the command line argument `--watch-ingress-without-class=true`, it will serve:
1. Ingresses without any `ingressClassName` set
2. Ingresses where the deprecated annotation (`kubernetes.io/ingress.class`) matches the value set in the command line argument `--ingress-class`
3. Ingresses that refer to any IngressClass that has the same `spec.controller` as configured in `--controller-class`
Note that running Ingress-Nginx B with the command line argument `--watch-ingress-without-class=true` while running Ingress-Nginx A with `--watch-ingress-without-class=false` is a supported configuration.
If you have two ingress-nginx controllers for the same cluster, both running with `--watch-ingress-without-class=true`, then there is likely to be a conflict.
## Why am I seeing "ingress class annotation is not equal to the expected by Ingress Controller" in my controller logs?
It is highly likely that you will also see the name of the ingress resource in the same error message.
This error message has been observed when using the deprecated annotation (`kubernetes.io/ingress.class`) in an Ingress resource manifest.
It is recommended to use the `.spec.ingressClassName` field of the Ingress resource, to specify the name of the IngressClass of the Ingress you are defining.
# Ingress Path Matching
## Regular Expression Support
!!! important
    Regular expressions are not supported in the `spec.rules.host` field. The wildcard character '\*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "\*").
!!! note
Please see the [FAQ](../faq.md#validation-of-path) for Validation Of __`path`__
The ingress controller supports **case insensitive** regular expressions in the `spec.rules.http.paths.path` field.
This can be enabled by setting the `nginx.ingress.kubernetes.io/use-regex` annotation to `true` (the default is false).
!!! hint
    Kubernetes only accepts expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2.
See the [RE2 Syntax](https://github.com/google/re2/wiki/Syntax) documentation for differences.
See the [description](./nginx-configuration/annotations.md#use-regex) of the `use-regex` annotation for more details.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
rules:
- host: test.com
http:
paths:
- path: /foo/.*
pathType: ImplementationSpecific
backend:
service:
name: test
port:
number: 80
```
The preceding ingress definition would translate to the following location block within the NGINX configuration for the `test.com` server:
```txt
location ~* "^/foo/.*" {
...
}
```
## Path Priority
In NGINX, regular expressions follow a **first match** policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
**Please read the [warning](#warning) before using regular expressions in your ingress definitions.**
### Example
Let the following two ingress definitions be created:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress-1
spec:
ingressClassName: nginx
rules:
- host: test.com
http:
paths:
- path: /foo/bar
pathType: Prefix
backend:
service:
name: service1
port:
number: 80
- path: /foo/bar/
pathType: Prefix
backend:
service:
name: service2
port:
number: 80
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress-2
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
ingressClassName: nginx
rules:
- host: test.com
http:
paths:
- path: /foo/bar/(.+)
pathType: ImplementationSpecific
backend:
service:
name: service3
port:
number: 80
```
The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the `test.com` server:
```txt
location ~* ^/foo/bar/.+ {
...
}
location ~* "^/foo/bar/" {
...
}
location ~* "^/foo/bar" {
...
}
```
The following request URI's would match the corresponding location blocks:
- `test.com/foo/bar/1` matches `~* ^/foo/bar/.+` and will go to service 3.
- `test.com/foo/bar/` matches `~* ^/foo/bar/` and will go to service 2.
- `test.com/foo/bar` matches `~* ^/foo/bar` and will go to service 1.
**IMPORTANT NOTES**:
- If the `use-regex` OR `rewrite-target` annotation is used on any Ingress for a given host, then the case insensitive regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
## Warning
The following example describes a case that may inflict unwanted path matching behavior.
This case is expected, and is a result of NGINX's first-match policy for paths that use the regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location). For more information about how a path is chosen, please read the following article: ["Understanding Nginx Server and Location Block Selection Algorithms"](https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms).
### Example
Let the following ingress be defined:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress-3
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
rules:
- host: test.com
http:
paths:
- path: /foo/bar/bar
pathType: Prefix
backend:
service:
name: test
port:
number: 80
- path: /foo/bar/[A-Z0-9]{3}
pathType: ImplementationSpecific
backend:
service:
name: test
port:
number: 80
```
The ingress controller would define the following location blocks (in this order) within the NGINX template for the `test.com` server:
```txt
location ~* "^/foo/bar/[A-Z0-9]{3}" {
...
}
location ~* "^/foo/bar/bar" {
...
}
```
A request to `test.com/foo/bar/bar` would match the `^/foo/bar/[A-Z0-9]{3}` location block instead of the longest EXACT matching path.
# Exposing FastCGI Servers
> **FastCGI** is a [binary protocol](https://en.wikipedia.org/wiki/Binary_protocol "Binary protocol") for interfacing interactive programs with a [web server](https://en.wikipedia.org/wiki/Web_server "Web server"). [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.
>
> — Wikipedia
The _ingress-nginx_ ingress controller can be used to directly expose [FastCGI](https://en.wikipedia.org/wiki/FastCGI) servers. Enabling FastCGI in your Ingress only requires setting the _backend-protocol_ annotation to `FCGI`, and with a couple more annotations you can customize the way _ingress-nginx_ handles the communication with your FastCGI _server_.
For most practical use-cases, PHP applications are a good example. PHP is not static HTML, so a FastCGI server like php-fpm processes an `index.php` script to produce the response to a request. See a working example below.
This [post in a FastCGI feature issue](https://github.com/kubernetes/ingress-nginx/issues/8207#issuecomment-2161405468) describes a test for the FastCGI feature. The same test is described below.
## Example Objects to expose a FastCGI server pod
### The FastCGI server pod
The _Pod_ object example below exposes port `9000`, which is the conventional FastCGI port.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: example-app
labels:
app: example-app
spec:
containers:
- name: example-app
image: php:fpm-alpine
ports:
- containerPort: 9000
name: fastcgi
```
- For this example to work, an HTML response should be received from the FastCGI server being exposed
- An HTTP request will be sent to the FastCGI server pod
- The response should be generated by a PHP script, as that is what we are demonstrating here
The image used here, `php:fpm-alpine`, does not ship with a ready-to-use PHP script, so we need to provide a simple one for this example to work.
- Use `kubectl exec` to get a shell inside the example-app pod (see the command sketch below)
- You will land at the path `/var/www/html`
- Create a simple PHP script there, named `index.php`
- Make the `index.php` file look like this
```
<!DOCTYPE html>
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<?php echo '<p>FastCGI Test Worked!</p>'; ?>
</body>
</html>
```
- Save and exit from the shell in the pod
- If you delete the pod, then you will have to recreate the file as this method is not persistent
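A hedged sketch of those steps from the command line (assuming the pod runs in the `default` namespace and that an editor such as the busybox `vi` is available inside the container):
```console
# Open a shell inside the pod; the working directory is /var/www/html
kubectl exec -it example-app -- sh

# Inside the pod: create the script shown above, then leave the shell
vi index.php    # paste the PHP test page, save and exit
exit
```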
### The FastCGI service
The _Service_ object example below matches port `9000` from the _Pod_ object above.
```yaml
apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
selector:
app: example-app
ports:
- port: 9000
targetPort: 9000
name: fastcgi
```
### The configMap object and the ingress object
The _Ingress_ and _ConfigMap_ objects below demonstrate the supported _FastCGI_ specific annotations.
!!! Important
    NGINX actually has 50 [FastCGI directives](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#directives).
    Not all of those directives have been exposed through the Ingress yet.
### The ConfigMap object
This configMap object is required to set the parameters of [FastCGI directives](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#directives)
!!! Attention
- The _ConfigMap_ **must** be created before creating the ingress object
- The _Ingress Controller_ needs to find the configMap when the _Ingress_ object with the FastCGI annotations is created
- So create the configMap before the ingress
- If the configMap is created after the ingress is created, then you will need to restart the _Ingress Controller_ pods.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: example-cm
data:
SCRIPT_FILENAME: "/var/www/html/index.php"
```
### The ingress object
- Do not create the ingress shown below until you have created the configMap seen above.
- You can see that this ingress matches the service `example-service`, and the port named `fastcgi` from above.
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
name: example-app
spec:
ingressClassName: nginx
rules:
- host: app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: example-service
port:
name: fastcgi
```
## Send a request to the exposed FastCGI server
Send the HTTP request either to the external IP of the ingress or to the ClusterIP address of the ingress-nginx controller pod.
```
% curl 172.19.0.2 -H "Host: app.example.com" -vik
* Trying 172.19.0.2:80...
* Connected to 172.19.0.2 (172.19.0.2) port 80
> GET / HTTP/1.1
> Host: app.example.com
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Wed, 12 Jun 2024 07:11:59 GMT
Date: Wed, 12 Jun 2024 07:11:59 GMT
< Content-Type: text/html; charset=UTF-8
Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< Connection: keep-alive
Connection: keep-alive
< X-Powered-By: PHP/8.3.8
X-Powered-By: PHP/8.3.8
<
<!DOCTYPE html>
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<p>FastCGI Test Worked</p> </body>
</html>
```
## FastCGI Ingress Annotations
To enable FastCGI, the `nginx.ingress.kubernetes.io/backend-protocol` annotation needs to be set to `FCGI`, which overrides the default `HTTP` value.
> `nginx.ingress.kubernetes.io/backend-protocol: "FCGI"`
**This enables the _FastCGI_ mode for all paths defined in the _Ingress_ object**
### The `nginx.ingress.kubernetes.io/fastcgi-index` Annotation
To specify an index file, the `fastcgi-index` annotation value can optionally be set. In the example below, the value is set to `index.php`. This annotation corresponds to [the _NGINX_ `fastcgi_index` directive](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_index).
> `nginx.ingress.kubernetes.io/fastcgi-index: "index.php"`
### The `nginx.ingress.kubernetes.io/fastcgi-params-configmap` Annotation
To specify [_NGINX_ `fastcgi_param` directives](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_param), the `fastcgi-params-configmap` annotation is used, which in turn must lead to a _ConfigMap_ object containing the _NGINX_ `fastcgi_param` directives as key/values.
> `nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-configmap"`
And the _ConfigMap_ object to specify the `SCRIPT_FILENAME` and `HTTP_PROXY` _NGINX's_ `fastcgi_param` directives will look like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: example-configmap
data:
SCRIPT_FILENAME: "/example/index.php"
HTTP_PROXY: ""
```
Using the _namespace/_ prefix is also supported, for example:
> `nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-namespace/example-configmap"`
# Multiple Ingress controllers
By default, deploying multiple Ingress controllers (e.g., `ingress-nginx` & `gce`) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways.
To fix this problem, use [IngressClasses](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class). The `kubernetes.io/ingress.class` annotation is no longer preferred or suggested, as it may be removed in the future; it is better to use the `ingress.spec.ingressClassName` field.
Note, however, that when a user has deployed with `scope.enabled`, the IngressClass resource field is not used.
## Using IngressClasses
If all ingress controllers respect IngressClasses (e.g. multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with `ingressClassName`.
First, ensure that `--controller-class=` and `--ingress-class` are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, then you need to specify a different, unique `--election-id` for the new instance of the controller.
```yaml
# ingress-nginx Deployment/Statefulset
spec:
template:
spec:
containers:
- name: ingress-nginx-internal-controller
args:
- /nginx-ingress-controller
- '--election-id=ingress-controller-leader'
- '--controller-class=k8s.io/internal-ingress-nginx'
- '--ingress-class=k8s.io/internal-nginx'
...
```
Then use the same value in the IngressClass:
```yaml
# ingress-nginx IngressClass
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: internal-nginx
spec:
controller: k8s.io/internal-ingress-nginx
...
```
And refer to that IngressClass in your Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
ingressClassName: internal-nginx
...
```
or if installing with Helm:
```yaml
controller:
electionID: ingress-controller-leader
ingressClass: internal-nginx # default: nginx
ingressClassResource:
name: internal-nginx # default: nginx
enabled: true
default: false
controllerValue: "k8s.io/internal-ingress-nginx" # default: k8s.io/ingress-nginx
```
!!! important
When running multiple ingress-nginx controllers, it will only process an unset class annotation if one of the controllers uses the default
`--controller-class` value (see `IsValid` method in `internal/ingress/annotations/class/main.go`), otherwise the class annotation becomes required.
    If `--controller-class` is set to the default value of `k8s.io/ingress-nginx`, the controller will monitor Ingresses with no class annotation *and* Ingresses with annotation class set to `nginx`. Use a non-default value for `--controller-class`, to ensure that the controller only satisfies the specific class of Ingresses.
## Using the kubernetes.io/ingress.class annotation (in deprecation)
If you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like ingress-nginx to claim.
For instance,
```yaml
metadata:
name: foo
annotations:
kubernetes.io/ingress.class: "gce"
```
will target the GCE controller, forcing the Ingress-NGINX controller to ignore it, while an annotation like:
```yaml
metadata:
name: foo
annotations:
kubernetes.io/ingress.class: "nginx"
```
will target the Ingress-NGINX controller, forcing the GCE controller to ignore it.
You can change the value "nginx" to something else by setting the `--ingress-class` flag:
```yaml
spec:
template:
spec:
containers:
- name: ingress-nginx-internal-controller
args:
- /nginx-ingress-controller
- --ingress-class=internal-nginx
```
then setting the corresponding `kubernetes.io/ingress.class: "internal-nginx"` annotation on your Ingresses.
To reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress.
If you are only running a single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except "nginx" or an empty string.
Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
# Monitoring
Two different methods to install and configure Prometheus and Grafana are described in this doc.
* Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress
* Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and the Helm charts support this by default.
## Prometheus and Grafana installation using Pod Annotations
This tutorial will show you how to install [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/) for scraping the metrics of the Ingress-Nginx Controller.
!!! important
This example uses `emptyDir` volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.
### Before You Begin
- The Ingress-Nginx Controller should already be deployed according to the deployment instructions [here](../deploy/index.md).
- The controller should be configured to export metrics. This requires three configuration settings on the controller:
1. controller.metrics.enabled=true
2. controller.podAnnotations."prometheus.io/scrape"="true"
3. controller.podAnnotations."prometheus.io/port"="10254"
- The easiest way to configure the controller for metrics is via `helm upgrade`. Assuming you have installed the ingress-nginx controller as a Helm release named `ingress-nginx`, you can simply run the command shown below:
```
helm upgrade ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
--set-string controller.podAnnotations."prometheus\.io/port"="10254"
```
- You can validate that the controller is configured for metrics by looking at the values of the installed release, like this:
```
helm get values ingress-nginx --namespace ingress-nginx
```
- You should be able to see the values shown below:
```
..
controller:
metrics:
enabled: true
podAnnotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
..
```
- If you are **not using helm**, you will have to edit your manifests like this:
- Service manifest:
```
apiVersion: v1
kind: Service
..
spec:
ports:
- name: prometheus
port: 10254
targetPort: prometheus
..
```
- Deployment manifest:
```
    apiVersion: apps/v1
kind: Deployment
..
spec:
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "10254"
spec:
containers:
- name: controller
ports:
- name: prometheus
containerPort: 10254
..
```
### Deploy and configure Prometheus Server
Note that the kustomize bases used in this tutorial are stored in the [deploy](https://github.com/kubernetes/ingress-nginx/tree/main/deploy) folder of the GitHub repository [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx).
- The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
- If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
- Running the following command deploys prometheus in Kubernetes:
```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
```
#### Prometheus Dashboard
- Open Prometheus dashboard in a web browser:
```console
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.103.59.201 <none> 80/TCP 3d
ingress-nginx NodePort 10.97.44.72 <none> 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h
prometheus-server NodePort 10.98.233.86 <none> 9090:32630/TCP 1m
```
- Obtain the IP address of the nodes in the running cluster:
```console
kubectl get nodes -o wide
```
- In some cases where the nodes only have internal IP addresses, we need to execute:
```
kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\(@.type==\"InternalIP\"\)].address}
10.192.0.2 10.192.0.3 10.192.0.4
```
- Open your browser and visit the following URL: _http://{node IP address}:{prometheus-svc-nodeport}_ to load the Prometheus Dashboard.
- According to the above example, this URL will be http://10.192.0.3:32630

#### Grafana
- Install Grafana using the command below:
```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
```
- Look at the services
```
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.103.59.201 <none> 80/TCP 3d
ingress-nginx NodePort 10.97.44.72 <none> 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h
prometheus-server NodePort 10.98.233.86 <none> 9090:32630/TCP 10m
grafana NodePort 10.98.233.87 <none> 3000:31086/TCP 10m
```
- Open your browser and visit the following URL: _http://{node IP address}:{grafana-svc-nodeport}_ to load the Grafana Dashboard.
According to the above example, this URL will be http://10.192.0.3:31086
  The username and password are both `admin`.
- After logging in you can import the Grafana dashboard from [official dashboards](https://github.com/kubernetes/ingress-nginx/tree/main/deploy/grafana/dashboards) by following the steps given below:
- Navigate to lefthand panel of grafana
- Hover on the gearwheel icon for Configuration and click "Data Sources"
- Click "Add data source"
- Select "Prometheus"
- Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
- Left menu (hover over +) -> Dashboard
- Click "Import"
- Enter the copy pasted json from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
- Click Import JSON
- Select the Prometheus data source
- Click "Import"

### Caveats
#### Wildcard ingresses
- By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you have two options (a Helm values sketch for the first option follows this list):
- Run the ingress controller with `--metrics-per-host=false`. You will lose labeling by hostname, but still have labeling by ingress.
- Run the ingress controller with `--metrics-per-undefined-host=true --metrics-per-host=true`. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames and CPU usage could also increase.
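For the first option, a minimal Helm values sketch is shown below. It assumes the chart's `controller.extraArgs` value, which renders each key as a `--key=value` flag on the controller container; adapt it to however you pass flags to the controller.
```yaml
# values.yaml (sketch, assuming a Helm-managed controller)
controller:
  metrics:
    enabled: true
  extraArgs:
    # rendered as --metrics-per-host=false; drops the host label to keep cardinality bounded
    metrics-per-host: "false"
```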
### Grafana dashboard using ingress resource
- If you want to expose the Grafana dashboard using an ingress resource, then you can (a minimal Ingress sketch follows this list):
- change the service type of the prometheus-server service and the grafana service to "ClusterIP" like this :
```
kubectl -n ingress-nginx edit svc grafana
```
- This will open the currently deployed service grafana in the default editor configured in your shell (vi/nvim/nano/other)
- scroll down to line 34 that looks like "type: NodePort"
- change it to look like "type: ClusterIP". Save and exit.
- create an ingress resource with backend as "grafana" and port as "3000"
- Similarly, you can edit the service "prometheus-server" and add an ingress resource.
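A minimal Ingress sketch for the Grafana service is shown below. The hostname and ingress class are assumptions; adapt them to your environment and add TLS as needed.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: ingress-nginx
spec:
  ingressClassName: nginx          # assumed ingress class
  rules:
    - host: grafana.example.com    # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana      # the Grafana service changed to ClusterIP above
                port:
                  number: 3000
```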
## Prometheus and Grafana installation using Service Monitors
This document assumes you are using Helm and the kube-prometheus-stack chart to install Prometheus and Grafana.
### Verify Ingress-Nginx Controller is installed
- The Ingress-Nginx Controller should already be deployed according to the deployment instructions [here](../deploy/index.md).
- To check if the Ingress controller is deployed, run:
```
kubectl get pods -n ingress-nginx
```
- The result should look something like:
```
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-7c489dc7b7-ccrf6 1/1 Running 0 19h
```
### Verify Prometheus is installed
- To check if Prometheus is already deployed, run the following command:
```
helm ls -A
```
```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-nginx ingress-nginx 10 2022-01-20 18:08:55.267373 -0800 PST deployed ingress-nginx-4.0.16 1.1.1
prometheus prometheus 1 2022-01-20 16:07:25.086828 -0800 PST deployed kube-prometheus-stack-30.1.0 0.53.1
```
- Notice that Prometheus is installed in a different namespace than ingress-nginx
- If Prometheus is not installed, you can install it from [here](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack)
### Re-configure Ingress-Nginx Controller
- The Ingress-Nginx Controller needs to be reconfigured to export metrics. This requires three additional settings on the controller:
```
controller.metrics.enabled=true
controller.metrics.serviceMonitor.enabled=true
controller.metrics.serviceMonitor.additionalLabels.release="prometheus"
```
- The easiest way of doing this is via `helm upgrade`:
```
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true \
--set controller.metrics.serviceMonitor.additionalLabels.release="prometheus"
```
- Here `controller.metrics.serviceMonitor.additionalLabels.release="prometheus"` should match the name of the helm release of the `kube-prometheus-stack`
- You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:
```
helm get values ingress-nginx --namespace ingress-nginx
```
```
controller:
metrics:
enabled: true
serviceMonitor:
additionalLabels:
release: prometheus
enabled: true
```
### Configure Prometheus
- Since Prometheus is running in a different namespace than ingress-nginx, by default it will not discover ServiceMonitors in other namespaces. Reconfigure your kube-prometheus-stack Helm installation to set the `serviceMonitorSelectorNilUsesHelmValues` flag to false. Likewise, Prometheus by default only discovers PodMonitors within its own namespace; disable that restriction by setting `podMonitorSelectorNilUsesHelmValues` to false.
- The configurations required are:
```
prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false
prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
```
- The easiest way of doing this is to use `helm upgrade ...`
```
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--namespace prometheus \
--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
```
- You can validate that Prometheus has been reconfigured by looking at the values of the installed release, like this:
```
helm get values prometheus --namespace prometheus
```
- You should be able to see the values shown below:
```
prometheus:
prometheusSpec:
podMonitorSelectorNilUsesHelmValues: false
serviceMonitorSelectorNilUsesHelmValues: false
```
### Connect and view Prometheus dashboard
- Port forward to Prometheus service. Find out the name of the prometheus service by using the following command:
```
kubectl get svc -n prometheus
```
The result of this command would look like:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 7h46m
prometheus-grafana ClusterIP 10.106.28.162 <none> 80/TCP 7h46m
prometheus-kube-prometheus-alertmanager ClusterIP 10.108.125.245 <none> 9093/TCP 7h46m
prometheus-kube-prometheus-operator ClusterIP 10.110.220.1 <none> 443/TCP 7h46m
prometheus-kube-prometheus-prometheus ClusterIP 10.102.72.134 <none> 9090/TCP 7h46m
prometheus-kube-state-metrics ClusterIP 10.104.231.181 <none> 8080/TCP 7h46m
prometheus-operated ClusterIP None <none> 9090/TCP 7h46m
prometheus-prometheus-node-exporter ClusterIP 10.96.247.128 <none> 9100/TCP 7h46m
```
prometheus-kube-prometheus-prometheus is the service we want to port forward to. We can do so using the following command:
```
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n prometheus 9090:9090
```
When you run the above command, you should see something like:
```
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
```
- Open your browser and visit http://localhost:{port-forwarded-port}. According to the above example, it would be http://localhost:9090

### Connect and view Grafana dashboard
- Port forward to Grafana service. Find out the name of the Grafana service by using the following command:
```
kubectl get svc -n prometheus
```
The result of this command would look like:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 7h46m
prometheus-grafana ClusterIP 10.106.28.162 <none> 80/TCP 7h46m
prometheus-kube-prometheus-alertmanager ClusterIP 10.108.125.245 <none> 9093/TCP 7h46m
prometheus-kube-prometheus-operator ClusterIP 10.110.220.1 <none> 443/TCP 7h46m
prometheus-kube-prometheus-prometheus ClusterIP 10.102.72.134 <none> 9090/TCP 7h46m
prometheus-kube-state-metrics ClusterIP 10.104.231.181 <none> 8080/TCP 7h46m
prometheus-operated ClusterIP None <none> 9090/TCP 7h46m
prometheus-prometheus-node-exporter ClusterIP 10.96.247.128 <none> 9100/TCP 7h46m
```
prometheus-grafana is the service we want to port forward to. We can do so using the following command:
```
kubectl port-forward svc/prometheus-grafana 3000:80 -n prometheus
```
When you run the above command, you should see something like:
```
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
```
- Open your browser and visit http://localhost:{port-forwarded-port}. According to the above example, it would be http://localhost:3000
  The default username/password is `admin`/`prom-operator`.
- After logging in you can import the Grafana dashboard from [official dashboards](https://github.com/kubernetes/ingress-nginx/tree/main/deploy/grafana/dashboards) by following the steps given below:
- Navigate to lefthand panel of grafana
- Hover on the gearwheel icon for Configuration and click "Data Sources"
- Click "Add data source"
- Select "Prometheus"
- Enter the details (note: I used http://10.102.72.134:9090 which is the CLUSTER-IP for Prometheus service)
- Left menu (hover over +) -> Dashboard
- Click "Import"
- Enter the copy pasted json from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
- Click Import JSON
- Select the Prometheus data source
- Click "Import"

## Exposed metrics
Prometheus metrics are exposed on port 10254.
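If you want to inspect the raw metrics without Prometheus, you can port-forward to the controller and query the endpoint directly. A minimal sketch, assuming the controller Deployment is named `ingress-nginx-controller` in the `ingress-nginx` namespace:
```console
kubectl port-forward -n ingress-nginx deployment/ingress-nginx-controller 10254:10254
# in a second terminal
curl -s http://localhost:10254/metrics | grep nginx_ingress_controller_requests
```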
### Request metrics
* `nginx_ingress_controller_request_duration_seconds` Histogram\
The request processing time in seconds, measured from the first bytes read from the client until the log write after the last bytes were sent to the client (affected by client speed).\
nginx var: `request_time`
* `nginx_ingress_controller_response_duration_seconds` Histogram\
The time spent on receiving the response from the upstream server in seconds (affected by client speed when the response is bigger than proxy buffers).\
Note: can be up to several milliseconds bigger than the `nginx_ingress_controller_request_duration_seconds` because of the different measuring method.\
nginx var: `upstream_response_time`
* `nginx_ingress_controller_header_duration_seconds` Histogram\
The time spent on receiving first header from the upstream server\
nginx var: `upstream_header_time`
* `nginx_ingress_controller_connect_duration_seconds` Histogram\
The time spent on establishing a connection with the upstream server\
nginx var: `upstream_connect_time`
* `nginx_ingress_controller_response_size` Histogram\
The response length (including request line, header, and request body)\
nginx var: `bytes_sent`
* `nginx_ingress_controller_request_size` Histogram\
The request length (including request line, header, and request body)\
nginx var: `request_length`
* `nginx_ingress_controller_requests` Counter\
The total number of client requests
* `nginx_ingress_controller_bytes_sent` Histogram\
The number of bytes sent to a client. **Deprecated**, use `nginx_ingress_controller_response_size`\
nginx var: `bytes_sent`
```
# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED! Use nginx_ingress_controller_response_size
# TYPE nginx_ingress_controller_bytes_sent histogram
# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server
# TYPE nginx_ingress_controller_connect_duration_seconds histogram
# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server
# TYPE nginx_ingress_controller_header_duration_seconds histogram
# HELP nginx_ingress_controller_request_duration_seconds The request processing time in milliseconds
# TYPE nginx_ingress_controller_request_duration_seconds histogram
# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)
# TYPE nginx_ingress_controller_request_size histogram
# HELP nginx_ingress_controller_requests The total number of client requests.
# TYPE nginx_ingress_controller_requests counter
# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server
# TYPE nginx_ingress_controller_response_duration_seconds histogram
# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)
# TYPE nginx_ingress_controller_response_size histogram
```
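As an illustration of how these metrics are typically consumed, the PromQL sketches below compute the per-ingress request rate and the 95th percentile request latency. The `ingress` label name is an assumption based on a default controller configuration; your label set may differ.
```
sum(rate(nginx_ingress_controller_requests[5m])) by (ingress)

histogram_quantile(0.95,
  sum(rate(nginx_ingress_controller_request_duration_seconds_bucket[5m])) by (le, ingress))
```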
### Nginx process metrics
```
# HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}
# TYPE nginx_ingress_controller_nginx_process_connections gauge
# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}
# TYPE nginx_ingress_controller_nginx_process_connections_total counter
# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds
# TYPE nginx_ingress_controller_nginx_process_cpu_seconds_total counter
# HELP nginx_ingress_controller_nginx_process_num_procs number of processes
# TYPE nginx_ingress_controller_nginx_process_num_procs gauge
# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970/01/01
# TYPE nginx_ingress_controller_nginx_process_oldest_start_time_seconds gauge
# HELP nginx_ingress_controller_nginx_process_read_bytes_total number of bytes read
# TYPE nginx_ingress_controller_nginx_process_read_bytes_total counter
# HELP nginx_ingress_controller_nginx_process_requests_total total number of client requests
# TYPE nginx_ingress_controller_nginx_process_requests_total counter
# HELP nginx_ingress_controller_nginx_process_resident_memory_bytes number of bytes of memory in use
# TYPE nginx_ingress_controller_nginx_process_resident_memory_bytes gauge
# HELP nginx_ingress_controller_nginx_process_virtual_memory_bytes number of bytes of memory in use
# TYPE nginx_ingress_controller_nginx_process_virtual_memory_bytes gauge
# HELP nginx_ingress_controller_nginx_process_write_bytes_total number of bytes written
# TYPE nginx_ingress_controller_nginx_process_write_bytes_total counter
```
### Controller metrics
```
# HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.
# TYPE nginx_ingress_controller_build_info gauge
# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations
# TYPE nginx_ingress_controller_check_success counter
# HELP nginx_ingress_controller_config_hash Running configuration hash actually running
# TYPE nginx_ingress_controller_config_hash gauge
# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful
# TYPE nginx_ingress_controller_config_last_reload_successful gauge
# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.
# TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge
# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate
# TYPE nginx_ingress_controller_ssl_certificate_info gauge
# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations
# TYPE nginx_ingress_controller_success counter
# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity
# TYPE nginx_ingress_controller_orphan_ingress gauge
```
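For example, the reload metrics above are commonly used for alerting. A hedged PromQL sketch that fires when the last configuration reload failed:
```
nginx_ingress_controller_config_last_reload_successful == 0
```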
### Admission metrics
```
# HELP nginx_ingress_controller_admission_config_size The size of the tested configuration
# TYPE nginx_ingress_controller_admission_config_size gauge
# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)
# TYPE nginx_ingress_controller_admission_render_duration gauge
# HELP nginx_ingress_controller_admission_render_ingresses The length of ingresses rendered by the admission controller
# TYPE nginx_ingress_controller_admission_render_ingresses gauge
# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)
# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge
# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)
# TYPE nginx_ingress_controller_admission_tested_duration gauge
# HELP nginx_ingress_controller_admission_tested_ingresses The length of ingresses processed by the admission controller
# TYPE nginx_ingress_controller_admission_tested_ingresses gauge
```
### Histogram buckets
You can configure buckets for histogram metrics using these command line options (here are their default values):
* `--time-buckets=[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`
* `--length-buckets=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]`
* `--size-buckets=[10, 100, 1000, 10000, 100000, 1e+06, 1e+07]`
# ConfigMaps
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system
components for the nginx-controller.
In order to overwrite nginx-controller configuration values as seen in [config.go](https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/controller/config/config.go),
you can add key-value pairs to the data section of the config-map. For Example:
```yaml
data:
map-hash-bucket-size: "128"
ssl-protocols: SSLv2
```
!!! important
The key and values in a ConfigMap can only be strings.
This means that if we want a value with boolean semantics, we need to quote it, like "true" or "false".
The same goes for numbers, like "100".
"Slice" types (defined below as `[]string` or `[]int`) can be provided as a comma-delimited string.
## Configuration options
The following table shows a configuration option's name, type, and the default value:
| name | type | default | notes |
|:--------------------------------------------------------------------------------|:-------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| [add-headers](#add-headers) | string | "" | |
| [allow-backend-server-header](#allow-backend-server-header) | bool | "false" | |
| [allow-cross-namespace-resources](#allow-cross-namespace-resources) | bool | "false" | |
| [allow-snippet-annotations](#allow-snippet-annotations) | bool | "false" | |
| [annotations-risk-level](#annotations-risk-level) | string | High | |
| [annotation-value-word-blocklist](#annotation-value-word-blocklist) | string array | "" | |
| [hide-headers](#hide-headers) | string array | empty | |
| [access-log-params](#access-log-params) | string | "" | |
| [access-log-path](#access-log-path) | string | "/var/log/nginx/access.log" | |
| [http-access-log-path](#http-access-log-path) | string | "" | |
| [stream-access-log-path](#stream-access-log-path) | string | "" | |
| [enable-access-log-for-default-backend](#enable-access-log-for-default-backend) | bool | "false" | |
| [error-log-path](#error-log-path) | string | "/var/log/nginx/error.log" | |
| [enable-modsecurity](#enable-modsecurity) | bool | "false" | |
| [modsecurity-snippet](#modsecurity-snippet) | string | "" | |
| [enable-owasp-modsecurity-crs](#enable-owasp-modsecurity-crs) | bool | "false" | |
| [client-header-buffer-size](#client-header-buffer-size) | string | "1k" | |
| [client-header-timeout](#client-header-timeout) | int | 60 | |
| [client-body-buffer-size](#client-body-buffer-size) | string | "8k" | |
| [client-body-timeout](#client-body-timeout) | int | 60 | |
| [disable-access-log](#disable-access-log) | bool | "false" | |
| [disable-ipv6](#disable-ipv6) | bool | "false" | |
| [disable-ipv6-dns](#disable-ipv6-dns) | bool | "false" | |
| [enable-underscores-in-headers](#enable-underscores-in-headers) | bool | "false" | |
| [enable-ocsp](#enable-ocsp) | bool | "false" | |
| [ignore-invalid-headers](#ignore-invalid-headers) | bool | "true" | |
| [retry-non-idempotent](#retry-non-idempotent) | bool | "false" | |
| [error-log-level](#error-log-level) | string | "notice" | |
| [http2-max-field-size](#http2-max-field-size) | string | "" | DEPRECATED in favour of [large_client_header_buffers](#large-client-header-buffers) |
| [http2-max-header-size](#http2-max-header-size) | string | "" | DEPRECATED in favour of [large_client_header_buffers](#large-client-header-buffers) |
| [http2-max-requests](#http2-max-requests) | int | 0 | DEPRECATED in favour of [keepalive_requests](#keepalive-requests) |
| [http2-max-concurrent-streams](#http2-max-concurrent-streams) | int | 128 | |
| [hsts](#hsts) | bool | "true" | |
| [hsts-include-subdomains](#hsts-include-subdomains) | bool | "true" | |
| [hsts-max-age](#hsts-max-age) | string | "31536000" | |
| [hsts-preload](#hsts-preload) | bool | "false" | |
| [keep-alive](#keep-alive) | int | 75 | |
| [keep-alive-requests](#keep-alive-requests) | int | 1000 | |
| [large-client-header-buffers](#large-client-header-buffers) | string | "4 8k" | |
| [log-format-escape-none](#log-format-escape-none) | bool | "false" | |
| [log-format-escape-json](#log-format-escape-json) | bool | "false" | |
| [log-format-upstream](#log-format-upstream) | string | `$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id` | |
| [log-format-stream](#log-format-stream) | string | `[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time` | |
| [enable-multi-accept](#enable-multi-accept) | bool | "true" | |
| [max-worker-connections](#max-worker-connections) | int | 16384 | |
| [max-worker-open-files](#max-worker-open-files) | int | 0 | |
| [map-hash-bucket-size](#map-hash-bucket-size) | int | 64 | |
| [nginx-status-ipv4-whitelist](#nginx-status-ipv4-whitelist) | []string | "127.0.0.1" | |
| [nginx-status-ipv6-whitelist](#nginx-status-ipv6-whitelist) | []string | "::1" | |
| [proxy-real-ip-cidr](#proxy-real-ip-cidr) | []string | "0.0.0.0/0" | |
| [proxy-set-headers](#proxy-set-headers) | string | "" | |
| [server-name-hash-max-size](#server-name-hash-max-size) | int | 1024 | |
| [server-name-hash-bucket-size](#server-name-hash-bucket-size) | int | `<size of the processor’s cache line>` | |
| [proxy-headers-hash-max-size](#proxy-headers-hash-max-size) | int | 512 | |
| [proxy-headers-hash-bucket-size](#proxy-headers-hash-bucket-size) | int | 64 | |
| [reuse-port](#reuse-port) | bool | "true" | |
| [server-tokens](#server-tokens) | bool | "false" | |
| [ssl-ciphers](#ssl-ciphers) | string | "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" | |
| [ssl-ecdh-curve](#ssl-ecdh-curve) | string | "auto" | |
| [ssl-dh-param](#ssl-dh-param) | string | "" | |
| [ssl-protocols](#ssl-protocols) | string | "TLSv1.2 TLSv1.3" | |
| [ssl-session-cache](#ssl-session-cache) | bool | "true" | |
| [ssl-session-cache-size](#ssl-session-cache-size) | string | "10m" | |
| [ssl-session-tickets](#ssl-session-tickets) | bool | "false" | |
| [ssl-session-ticket-key](#ssl-session-ticket-key) | string | `<Randomly Generated>` | |
| [ssl-session-timeout](#ssl-session-timeout) | string | "10m" | |
| [ssl-buffer-size](#ssl-buffer-size) | string | "4k" | |
| [use-proxy-protocol](#use-proxy-protocol) | bool | "false" | |
| [proxy-protocol-header-timeout](#proxy-protocol-header-timeout) | string | "5s" | |
| [enable-aio-write](#enable-aio-write) | bool | "true" | |
| [use-gzip](#use-gzip) | bool | "false" | |
| [use-geoip](#use-geoip) | bool | "true" | |
| [use-geoip2](#use-geoip2) | bool | "false" | |
| [geoip2-autoreload-in-minutes](#geoip2-autoreload-in-minutes) | int | "0" | |
| [enable-brotli](#enable-brotli) | bool | "false" | |
| [brotli-level](#brotli-level) | int | 4 | |
| [brotli-min-length](#brotli-min-length) | int | 20 | |
| [brotli-types](#brotli-types) | string | "application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component" | |
| [use-http2](#use-http2) | bool | "true" | |
| [gzip-disable](#gzip-disable) | string | "" | |
| [gzip-level](#gzip-level) | int | 1 | |
| [gzip-min-length](#gzip-min-length) | int | 256 | |
| [gzip-types](#gzip-types) | string | "application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component" | |
| [worker-processes](#worker-processes) | string | `<Number of CPUs>` | |
| [worker-cpu-affinity](#worker-cpu-affinity) | string | "" | |
| [worker-shutdown-timeout](#worker-shutdown-timeout) | string | "240s" | |
| [enable-serial-reloads](#enable-serial-reloads) | bool | "false" | |
| [load-balance](#load-balance) | string | "round_robin" | |
| [variables-hash-bucket-size](#variables-hash-bucket-size) | int | 128 | |
| [variables-hash-max-size](#variables-hash-max-size) | int | 2048 | |
| [upstream-keepalive-connections](#upstream-keepalive-connections) | int | 320 | |
| [upstream-keepalive-time](#upstream-keepalive-time) | string | "1h" | |
| [upstream-keepalive-timeout](#upstream-keepalive-timeout) | int | 60 | |
| [upstream-keepalive-requests](#upstream-keepalive-requests) | int | 10000 | |
| [limit-conn-zone-variable](#limit-conn-zone-variable) | string | "$binary_remote_addr" | |
| [proxy-stream-timeout](#proxy-stream-timeout) | string | "600s" | |
| [proxy-stream-next-upstream](#proxy-stream-next-upstream) | bool | "true" | |
| [proxy-stream-next-upstream-timeout](#proxy-stream-next-upstream-timeout) | string | "600s" | |
| [proxy-stream-next-upstream-tries](#proxy-stream-next-upstream-tries) | int | 3 | |
| [proxy-stream-responses](#proxy-stream-responses) | int | 1 | |
| [bind-address](#bind-address) | []string | "" | |
| [use-forwarded-headers](#use-forwarded-headers) | bool | "false" | |
| [enable-real-ip](#enable-real-ip) | bool | "false" | |
| [forwarded-for-header](#forwarded-for-header) | string | "X-Forwarded-For" | |
| [compute-full-forwarded-for](#compute-full-forwarded-for) | bool | "false" | |
| [proxy-add-original-uri-header](#proxy-add-original-uri-header) | bool | "false" | |
| [generate-request-id](#generate-request-id) | bool | "true" | |
| [jaeger-collector-host](#jaeger-collector-host) | string | "" | |
| [jaeger-collector-port](#jaeger-collector-port) | int | 6831 | |
| [jaeger-endpoint](#jaeger-endpoint) | string | "" | |
| [jaeger-service-name](#jaeger-service-name) | string | "nginx" | |
| [jaeger-propagation-format](#jaeger-propagation-format) | string | "jaeger" | |
| [jaeger-sampler-type](#jaeger-sampler-type) | string | "const" | |
| [jaeger-sampler-param](#jaeger-sampler-param) | string | "1" | |
| [jaeger-sampler-host](#jaeger-sampler-host) | string | "http://127.0.0.1" | |
| [jaeger-sampler-port](#jaeger-sampler-port) | int | 5778 | |
| [jaeger-trace-context-header-name](#jaeger-trace-context-header-name) | string | uber-trace-id | |
| [jaeger-debug-header](#jaeger-debug-header) | string | uber-debug-id | |
| [jaeger-baggage-header](#jaeger-baggage-header) | string | jaeger-baggage | |
| [jaeger-trace-baggage-header-prefix](#jaeger-trace-baggage-header-prefix) | string | uberctx- | |
| [datadog-collector-host](#datadog-collector-host) | string | "" | |
| [datadog-collector-port](#datadog-collector-port) | int | 8126 | |
| [datadog-service-name](#datadog-service-name) | string | "nginx" | |
| [datadog-environment](#datadog-environment) | string | "prod" | |
| [datadog-operation-name-override](#datadog-operation-name-override) | string | "nginx.handle" | |
| [datadog-priority-sampling](#datadog-priority-sampling) | bool | "true" | |
| [datadog-sample-rate](#datadog-sample-rate) | float | 1.0 | |
| [enable-opentelemetry](#enable-opentelemetry) | bool | "false" | |
| [opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-span) | bool | "true" | |
| [opentelemetry-operation-name](#opentelemetry-operation-name) | string | "" | |
| [opentelemetry-config](#opentelemetry-config) | string | "/etc/nginx/opentelemetry.toml" | |
| [otlp-collector-host](#otlp-collector-host) | string | "" | |
| [otlp-collector-port](#otlp-collector-port) | int | 4317 | |
| [otel-max-queuesize](#otel-max-queuesize) | int | | |
| [otel-schedule-delay-millis](#otel-schedule-delay-millis) | int | | |
| [otel-max-export-batch-size](#otel-max-export-batch-size) | int | | |
| [otel-service-name](#otel-service-name) | string | "nginx" | |
| [otel-sampler](#otel-sampler) | string | "AlwaysOff" | |
| [otel-sampler-parent-based](#otel-sampler-parent-based) | bool | "false" | |
| [otel-sampler-ratio](#otel-sampler-ratio) | float | 0.01 | |
| [main-snippet](#main-snippet) | string | "" | |
| [http-snippet](#http-snippet) | string | "" | |
| [server-snippet](#server-snippet) | string | "" | |
| [stream-snippet](#stream-snippet) | string | "" | |
| [location-snippet](#location-snippet) | string | "" | |
| [custom-http-errors](#custom-http-errors) | []int | []int{} | |
| [proxy-body-size](#proxy-body-size) | string | "1m" | |
| [proxy-connect-timeout](#proxy-connect-timeout) | int | 5 | |
| [proxy-read-timeout](#proxy-read-timeout) | int | 60 | |
| [proxy-send-timeout](#proxy-send-timeout) | int | 60 | |
| [proxy-buffers-number](#proxy-buffers-number) | int | 4 | |
| [proxy-buffer-size](#proxy-buffer-size) | string | "4k" | |
| [proxy-busy-buffers-size](#proxy-busy-buffers-size) | string | "8k" | |
| [proxy-cookie-path](#proxy-cookie-path) | string | "off" | |
| [proxy-cookie-domain](#proxy-cookie-domain) | string | "off" | |
| [proxy-next-upstream](#proxy-next-upstream) | string | "error timeout" | |
| [proxy-next-upstream-timeout](#proxy-next-upstream-timeout) | int | 0 | |
| [proxy-next-upstream-tries](#proxy-next-upstream-tries) | int | 3 | |
| [proxy-redirect-from](#proxy-redirect-from) | string | "off" | |
| [proxy-request-buffering](#proxy-request-buffering) | string | "on" | |
| [ssl-redirect](#ssl-redirect) | bool | "true" | |
| [force-ssl-redirect](#force-ssl-redirect) | bool | "false" | |
| [denylist-source-range](#denylist-source-range) | []string | []string{} | |
| [whitelist-source-range](#whitelist-source-range) | []string | []string{} | |
| [skip-access-log-urls](#skip-access-log-urls) | []string | []string{} | |
| [limit-rate](#limit-rate) | int | 0 | |
| [limit-rate-after](#limit-rate-after) | int | 0 | |
| [lua-shared-dicts](#lua-shared-dicts) | string | "" | |
| [http-redirect-code](#http-redirect-code) | int | 308 | |
| [proxy-buffering](#proxy-buffering) | string | "off" | |
| [limit-req-status-code](#limit-req-status-code) | int | 503 | |
| [limit-conn-status-code](#limit-conn-status-code) | int | 503 | |
| [enable-syslog](#enable-syslog) | bool | "false" | |
| [syslog-host](#syslog-host) | string | "" | |
| [syslog-port](#syslog-port) | int | 514 | |
| [no-tls-redirect-locations](#no-tls-redirect-locations) | string | "/.well-known/acme-challenge" | |
| [global-allowed-response-headers](#global-allowed-response-headers) | string | "" | |
| [global-auth-url](#global-auth-url) | string | "" | |
| [global-auth-method](#global-auth-method) | string | "" | |
| [global-auth-signin](#global-auth-signin) | string | "" | |
| [global-auth-signin-redirect-param](#global-auth-signin-redirect-param) | string | "rd" | |
| [global-auth-response-headers](#global-auth-response-headers) | string | "" | |
| [global-auth-request-redirect](#global-auth-request-redirect) | string | "" | |
| [global-auth-snippet](#global-auth-snippet) | string | "" | |
| [global-auth-cache-key](#global-auth-cache-key) | string | "" | |
| [global-auth-cache-duration](#global-auth-cache-duration) | string | "200 202 401 5m" | |
| [no-auth-locations](#no-auth-locations) | string | "/.well-known/acme-challenge" | |
| [block-cidrs](#block-cidrs) | []string | "" | |
| [block-user-agents](#block-user-agents) | []string | "" | |
| [block-referers](#block-referers) | []string | "" | |
| [proxy-ssl-location-only](#proxy-ssl-location-only) | bool | "false" | |
| [default-type](#default-type) | string | "text/html" | |
| [service-upstream](#service-upstream) | bool | "false" | |
| [ssl-reject-handshake](#ssl-reject-handshake) | bool | "false" | |
| [debug-connections](#debug-connections) | []string | "127.0.0.1,1.1.1.1/24" | |
| [strict-validate-path-type](#strict-validate-path-type) | bool | "true" | |
| [grpc-buffer-size-kb](#grpc-buffer-size-kb) | int | 0 | |
| [relative-redirects](#relative-redirects) | bool | false | |
## add-headers
Sets custom headers from named configmap before sending traffic to the client. See [proxy-set-headers](#proxy-set-headers). [example](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers)
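As a minimal sketch of how this might be wired up, the first ConfigMap below holds the headers themselves and the second (the controller ConfigMap) references it using the same `namespace/name` format described for [proxy-set-headers](#proxy-set-headers). The ConfigMap names and the `ingress-nginx` namespace are assumptions that depend on how the controller was installed.
```yaml
# Hypothetical ConfigMap holding the headers to add to client responses.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Frame-Options: "DENY"
  X-Content-Type-Options: "nosniff"
---
# Controller ConfigMap referencing the headers ConfigMap by namespace/name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name depends on your installation
  namespace: ingress-nginx
data:
  add-headers: "ingress-nginx/custom-headers"
```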
## allow-backend-server-header
Enables the return of the header Server from the backend instead of the generic nginx string. _**default:**_ is disabled
## allow-cross-namespace-resources
Enables users to consume cross-namespace resources in annotations, where this was previously allowed by default. _**default:**_ false
**Annotations that may be impacted with this change**:
* `auth-secret`
* `auth-proxy-set-header`
* `auth-tls-secret`
* `fastcgi-params-configmap`
* `proxy-ssl-secret`
## allow-snippet-annotations
Enables Ingress to parse and add *-snippet annotations/directives created by the user. _**default:**_ `false`
Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this
may allow a user to add restricted configurations to the final nginx.conf file.
## annotations-risk-level
Represents the risk accepted on an annotation. If the risk is, for instance `Medium`, annotations with risk High and Critical will not be accepted.
Accepted values are `Critical`, `High`, `Medium` and `Low`.
_**default:**_ `High`
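The sketch below shows how these two options might be set together in the controller ConfigMap when trusted users need snippet annotations; the ConfigMap name and namespace are assumptions, and raising the accepted risk level may be required because snippet annotations are treated as high-risk.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # Only enable if Ingress authors are trusted (see the warning above).
  allow-snippet-annotations: "true"
  # Accept annotations up to and including the Critical risk level.
  annotations-risk-level: "Critical"
```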
## annotation-value-word-blocklist
Contains a comma-separated list of chars/words that are well known to be used to abuse Ingress configuration
and must be blocked. Related to [CVE-2021-25742](https://github.com/kubernetes/ingress-nginx/issues/7837).
When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.
_**default:**_ `""`
When setting this option, the default blocklist is overridden, which means that the Ingress admin should add all the words
that should be blocked. Here is a suggested blocklist.
_**suggested:**_ `"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\""`
## hide-headers
Sets additional headers that will not be passed from the upstream server to the client response.
_**default:**_ empty
_References:_
[https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header)
## access-log-params
Additional parameters for access_log. For example, `buffer=16k, gzip, flush=1m`.
_References:_
[https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log)
## access-log-path
Access log path for both http and stream context. Goes to `/var/log/nginx/access.log` by default.
__Note:__ the file `/var/log/nginx/access.log` is a symlink to `/dev/stdout`
## http-access-log-path
Access log path for http context globally.
_**default:**_ ""
__Note:__ If not specified, the `access-log-path` will be used.
## stream-access-log-path
Access log path for stream context globally.
_**default:**_ ""
__Note:__ If not specified, the `access-log-path` will be used.
## enable-access-log-for-default-backend
Enables logging access to the default backend. _**default:**_ is disabled.
## error-log-path
Error log path. Goes to `/var/log/nginx/error.log` by default.
__Note:__ the file `/var/log/nginx/error.log` is a symlink to `/dev/stderr`
_References:_
[https://nginx.org/en/docs/ngx_core_module.html#error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log)
## enable-modsecurity
Enables the modsecurity module for NGINX. _**default:**_ is disabled
## enable-owasp-modsecurity-crs
Enables the OWASP ModSecurity Core Rule Set (CRS). _**default:**_ is disabled
## modsecurity-snippet
Adds custom rules to modsecurity section of nginx configuration
## client-header-buffer-size
Allows configuring a custom buffer size for reading the client request header.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size)
## client-header-timeout
Defines a timeout for reading client request header, in seconds.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout)
## client-body-buffer-size
Sets buffer size for reading client request body.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size)
## client-body-timeout
Defines a timeout for reading client request body, in seconds.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout)
## disable-access-log
Disables the Access Log from the entire Ingress Controller. _**default:**_ `false`
_References:_
[https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log)
## disable-ipv6
Disables listening on IPv6. _**default:**_ `false`; IPv6 listening is enabled
## disable-ipv6-dns
Disables IPv6 for the nginx DNS resolver. _**default:**_ `false`; IPv6 resolving is enabled.
## enable-underscores-in-headers
Enables underscores in header names. _**default:**_ is disabled
## enable-ocsp
Enables [Online Certificate Status Protocol stapling](https://en.wikipedia.org/wiki/OCSP_stapling) (OCSP) support.
_**default:**_ is disabled
## ignore-invalid-headers
Set if header fields with invalid names should be ignored.
_**default:**_ is enabled
## retry-non-idempotent
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".
## error-log-level
Configures the logging level of errors. Valid values, in order of increasing severity, are `debug`, `info`, `notice`, `warn`, `error`, `crit`, `alert` and `emerg`.
_References:_
[https://nginx.org/en/docs/ngx_core_module.html#error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log)
## http2-max-field-size
!!! warning
This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [large-client-header-buffers](#large-client-header-buffers) instead.
Limits the maximum size of an HPACK-compressed request header field.
_References:_
[https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size)
## http2-max-header-size
!!! warning
This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [large-client-header-buffers](#large-client-header-buffers) instead.
Limits the maximum size of the entire request header list after HPACK decompression.
_References:_
[https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size)
## http2-max-requests
!!! warning
This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [upstream-keepalive-requests](#upstream-keepalive-requests) instead.
Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.
_References:_
[https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests)
## http2-max-concurrent-streams
Sets the maximum number of concurrent HTTP/2 streams in a connection.
_References:_
[https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams)
## hsts
Enables or disables the header HSTS in servers running SSL.
HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
_References:_
- [https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security](https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security)
- [https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server](https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server)
## hsts-include-subdomains
Enables or disables the use of HSTS in all the subdomains of the server-name.
## hsts-max-age
Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
## hsts-preload
Enables or disables the preload attribute in the HSTS feature (when it is enabled).
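A minimal ConfigMap sketch tying the HSTS options above together; the max-age value is illustrative and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  hsts: "true"
  hsts-max-age: "63072000"          # two years, in seconds
  hsts-include-subdomains: "true"
  hsts-preload: "false"
```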
## keep-alive
Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout)
!!! important
Setting `keep-alive: '0'` will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7
```
Changes with nginx 1.19.7 16 Feb 2021
*) Change: connections handling in HTTP/2 has been changed to better
match HTTP/1.x; the "http2_recv_timeout", "http2_idle_timeout", and
"http2_max_requests" directives have been removed, the
"keepalive_timeout" and "keepalive_requests" directives should be
used instead.
```
_References:_
[nginx change log](https://nginx.org/en/CHANGES)
[nginx issue tracker](https://trac.nginx.org/nginx/ticket/2155)
[nginx mailing list](https://mailman.nginx.org/pipermail/nginx/2021-May/060697.html)
## keep-alive-requests
Sets the maximum number of requests that can be served through one keep-alive connection.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests)
## large-client-header-buffers
Sets the maximum number and size of buffers used for reading large client request header. _**default:**_ 4 8k
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers](https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers)
## log-format-escape-none
Sets if the escape parameter is disabled entirely for character escaping in variables ("true") or controlled by [log-format-escape-json](#log-format-escape-json) ("false"). Also see the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) documentation.
## log-format-escape-json
Sets if the escape parameter allows JSON ("true") or default character escaping in variables ("false"). Also see the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) documentation.
## log-format-upstream
Sets the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).
Example for json output:
```json
log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x_forwarded_for": "$proxy_add_x_forwarded_for", "request_id": "$req_id",
"remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol",
"path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer",
"http_user_agent": "$http_user_agent" }'
```
Please check the [log-format](log-format.md) for definition of each field.
## log-format-stream
Sets the nginx [stream format](https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format).
## enable-multi-accept
If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at once.
_**default:**_ true
_References:_
[https://nginx.org/en/docs/ngx_core_module.html#multi_accept](https://nginx.org/en/docs/ngx_core_module.html#multi_accept)
## max-worker-connections
Sets the [maximum number of simultaneous connections](https://nginx.org/en/docs/ngx_core_module.html#worker_connections) that can be opened by each worker process.
0 will use the value of [max-worker-open-files](#max-worker-open-files).
_**default:**_ 16384
!!! tip
Using 0 in scenarios of high load improves performance at the cost of increased RAM utilization (even when idle).
## max-worker-open-files
Sets the [maximum number of files](https://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile) that can be opened by each worker process.
The default of 0 means "max open files (system's limit) - 1024".
_**default:**_ 0
## map-hash-bucket-size
Sets the bucket size for the [map variables hash tables](https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size). The details of setting up hash tables are provided in a separate [document](https://nginx.org/en/docs/hash.html).
## proxy-real-ip-cidr
If `use-forwarded-headers` or `use-proxy-protocol` is enabled, `proxy-real-ip-cidr` defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks.
_**default:**_ "0.0.0.0/0"
## proxy-set-headers
Sets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See [example](https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/)
## server-name-hash-max-size
Sets the maximum size of the [server names hash tables](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used for server names, map directive values, MIME types, names of request header strings, etc.
_References:_
[https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
## server-name-hash-bucket-size
Sets the size of the bucket for the server names hash tables.
_References:_
- [https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
- [https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size)
## proxy-headers-hash-max-size
Sets the maximum size of the proxy headers hash tables.
_References:_
- [https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
- [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size)
## reuse-port
Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing the kernel to distribute incoming connections between worker processes.
_**default:**_ true
## proxy-headers-hash-bucket-size
Sets the size of the bucket for the proxy headers hash tables.
_References:_
- [https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
- [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size)
## server-tokens
Sends the NGINX `Server` header in responses and displays the NGINX version in error pages. _**default:**_ is disabled
## ssl-ciphers
Sets the [ciphers](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library.
The default cipher list is:
`ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384`.
The ordering of a cipher suite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy).
DHE-based ciphers will not be available until a DH parameter is configured; see [Custom DH parameters for perfect forward secrecy](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param).
Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/).
__Note:__ The `ssl_prefer_server_ciphers` directive will be enabled by default for the http context.
## ssl-ecdh-curve
Specifies a curve for ECDHE ciphers.
_References:_
[https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve)
## ssl-dh-param
Sets the name of the Secret that contains the Diffie-Hellman key to help with "Perfect Forward Secrecy".
_References:_
- [https://wiki.openssl.org/index.php/Diffie-Hellman_parameters](https://wiki.openssl.org/index.php/Diffie-Hellman_parameters)
- [https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam](https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam)
- [https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam)
## ssl-protocols
Sets the [SSL protocols](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use. The default is: `TLSv1.2 TLSv1.3`.
Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`.
## ssl-early-data
Enables or disables TLS 1.3 [early data](https://tools.ietf.org/html/rfc8446#section-2.3), also known as Zero Round Trip
Time Resumption (0-RTT).
This requires `ssl-protocols` to have `TLSv1.3` enabled. Enable this with caution, because requests sent within early
data are subject to [replay attacks](https://tools.ietf.org/html/rfc8470).
[ssl_early_data](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data). The default is: `false`.
## ssl-session-cache
Enables or disables the use of shared [SSL cache](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.
## ssl-session-cache-size
Sets the size of the [SSL shared session cache](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.
## ssl-session-tickets
Enables or disables session resumption through [TLS session tickets](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets).
## ssl-session-ticket-key
Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string.
To create a ticket: `openssl rand 80 | openssl enc -A -base64`
By default, a randomly generated [TLS session ticket key](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets) is used.
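A sketch of wiring a generated key into the controller ConfigMap; the placeholder must be replaced with the output of the `openssl` command above, and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  ssl-session-tickets: "true"
  # Replace with the output of: openssl rand 80 | openssl enc -A -base64
  ssl-session-ticket-key: "<base64-encoded key>"
```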
## ssl-session-timeout
Sets the time during which a client may [reuse the session](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.
## ssl-buffer-size
Sets the size of the [SSL buffer](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).
_References:_
[https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/)
## use-proxy-protocol
Enables or disables the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
## proxy-protocol-header-timeout
Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection.
_**default:**_ 5s
## enable-aio-write
Enables or disables the directive [aio_write](https://nginx.org/en/docs/http/ngx_http_core_module.html#aio_write) that writes files asynchronously. _**default:**_ true
## use-gzip
Enables or disables compression of HTTP responses using the ["gzip" module](https://nginx.org/en/docs/http/ngx_http_gzip_module.html). MIME types to compress are controlled by [gzip-types](#gzip-types). _**default:**_ false
## use-geoip
Enables or disables ["geoip" module](https://nginx.org/en/docs/http/ngx_http_geoip_module.html) that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.
_**default:**_ true
> __Note:__ MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. [discontinuation notice](https://support.maxmind.com/geolite-legacy-discontinuation-notice/). Consider [use-geoip2](#use-geoip2) below.
## use-geoip2
Enables the [geoip2 module](https://github.com/leev/ngx_http_geoip2_module) for NGINX.
Since `0.27.0` and due to a [change in the MaxMind databases](https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/) a license is required to have access to the databases.
For this reason, the flag `--maxmind-license-key` must be set on the ingress controller deployment so that the required databases can be downloaded during the initialization of the ingress controller.
Alternatively, it is possible to use a volume to mount the files `/etc/ingress-controller/geoip/GeoLite2-City.mmdb` and `/etc/ingress-controller/geoip/GeoLite2-ASN.mmdb`, avoiding the overhead of the download.
!!! important
If the feature is enabled but the files are missing, GeoIP2 will not be enabled.
_**default:**_ false
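A sketch of enabling GeoIP2 via the ConfigMap, assuming the controller was also started with the `--maxmind-license-key` flag (or the `.mmdb` files were mounted as described above); the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  use-geoip2: "true"
  # Optionally reload the MaxMind databases every hour (see geoip2-autoreload-in-minutes below).
  geoip2-autoreload-in-minutes: "60"
```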
## geoip2-autoreload-in-minutes
Enables the [geoip2 module](https://github.com/leev/ngx_http_geoip2_module) autoreload in MaxMind databases setting the interval in minutes.
_**default:**_ 0
## enable-brotli
Enables or disables compression of HTTP responses using the ["brotli" module](https://github.com/google/ngx_brotli).
The default mime type list to compress is: `application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`.
_**default:**_ false
> __Note:__ Brotli does not work in Safari < 11. For more information see [https://caniuse.com/#feat=brotli](https://caniuse.com/#feat=brotli)
## brotli-level
Sets the Brotli Compression Level that will be used. _**default:**_ 4
## brotli-min-length
Minimum length of responses, in bytes, that will be eligible for brotli compression. _**default:**_ 20
## brotli-types
Sets the MIME Types that will be compressed on-the-fly by brotli.
_**default:**_ `application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`
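A sketch of enabling Brotli with a reduced MIME type list; the values are illustrative and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  enable-brotli: "true"
  brotli-level: "5"
  brotli-min-length: "512"
  # Compress only a few common text types on the fly.
  brotli-types: "application/json application/javascript text/css text/plain"
```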
## use-http2
Enables or disables [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections.
## gzip-disable
Disables [gzipping](http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_disable) of responses for requests with "User-Agent" header fields matching any of the specified regular expressions.
## gzip-level
Sets the gzip Compression Level that will be used. _**default:**_ 1
## gzip-min-length
Minimum length, in bytes, of a response for it to be eligible for gzip compression. _**default:**_ 256
## gzip-types
Sets the MIME types in addition to "text/html" to compress. The special value "\*" matches any MIME type. Responses with the "text/html" type are always compressed if [`use-gzip`](#use-gzip) is enabled.
_**default:**_ `application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`.
## worker-processes
Sets the number of [worker processes](https://nginx.org/en/docs/ngx_core_module.html#worker_processes).
The default of "auto" means number of available CPU cores.
## worker-cpu-affinity
Binds worker processes to the sets of CPUs. [worker_cpu_affinity](https://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity).
By default worker processes are not bound to any specific CPUs. The value can be:
- "": an empty string indicates that no affinity is applied.
- cpumask: e.g. `0001 0010 0100 1000` to bind processes to specific CPUs.
- auto: binds worker processes automatically to available CPUs.
## worker-shutdown-timeout
Sets a timeout for Nginx to [wait for worker to gracefully shutdown](https://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout). _**default:**_ "240s"
## load-balance
Sets the algorithm to use for load balancing.
The value can either be:
- round_robin: to use the default round robin load balancer
- ewma: to use the Peak EWMA method for routing ([implementation](https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/balancer/ewma.lua))
The default is `round_robin`.
- To load balance using consistent hashing of IP or other variables, consider the `nginx.ingress.kubernetes.io/upstream-hash-by` annotation.
- To load balance using session cookies, consider the `nginx.ingress.kubernetes.io/affinity` annotation.
_References:_
[https://nginx.org/en/docs/http/load_balancing.html](https://nginx.org/en/docs/http/load_balancing.html)
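For example, a minimal ConfigMap sketch switching to the EWMA balancer (ConfigMap name and namespace are assumptions):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # Replace the default round_robin algorithm with Peak EWMA.
  load-balance: "ewma"
```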
## variables-hash-bucket-size
Sets the bucket size for the variables hash table.
_References:_
[https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size](https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size)
## variables-hash-max-size
Sets the maximum size of the variables hash table.
_References:_
[https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size](https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size)
## upstream-keepalive-connections
Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle
keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is
exceeded, the least recently used connections are closed.
_**default:**_ 320
_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive)
## upstream-keepalive-time
Sets the maximum time during which requests can be processed through one keepalive connection.
_**default:**_ "1h"
_References:_
[http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time)
## upstream-keepalive-timeout
Sets a timeout during which an idle keepalive connection to an upstream server will stay open.
_**default:**_ 60
_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout)
## upstream-keepalive-requests
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of
requests is made, the connection is closed.
_**default:**_ 10000
_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests)
## limit-conn-zone-variable
Sets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone). The default is `$binary_remote_addr`, whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.
## proxy-stream-timeout
Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout)
## proxy-stream-next-upstream
When a connection to the proxied server cannot be established, determines whether a client connection will be passed to the next server.
_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream)
## proxy-stream-next-upstream-timeout
Limits the time allowed to pass a connection to the next server. The 0 value turns off this limitation.
_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout)
## proxy-stream-next-upstream-tries
Limits the number of possible tries for passing a connection to the next server. The 0 value turns off this limitation.
_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries)
## proxy-stream-responses
Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.
_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses)
## bind-address
Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
## use-forwarded-headers
If true, NGINX passes the incoming `X-Forwarded-*` headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.
If false, NGINX ignores incoming `X-Forwarded-*` headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind an L3/packet-based load balancer that doesn't alter the source IP in the packets.
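A sketch of a typical setup behind a trusted L7 load balancer, combining this option with [proxy-real-ip-cidr](#proxy-real-ip-cidr) and [forwarded-for-header](#forwarded-for-header); the CIDR is hypothetical and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # Trust X-Forwarded-* headers coming from the load balancer range below.
  use-forwarded-headers: "true"
  proxy-real-ip-cidr: "10.0.0.0/8"          # hypothetical address range of the external LB
  forwarded-for-header: "X-Forwarded-For"
```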
## enable-real-ip
`enable-real-ip` enables the configuration of [https://nginx.org/en/docs/http/ngx_http_realip_module.html](https://nginx.org/en/docs/http/ngx_http_realip_module.html). Specific attributes of the module can be configured further by using `forwarded-for-header` and `proxy-real-ip-cidr` settings.
## forwarded-for-header
Sets the header field for identifying the originating IP address of a client. _**default:**_ X-Forwarded-For
## compute-full-forwarded-for
Appends the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
## proxy-add-original-uri-header
Adds an X-Original-Uri header with the original request URI to the backend request
## generate-request-id
Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request
## jaeger-collector-host
Specifies the host to use when uploading traces. It must be a valid URL.
## jaeger-collector-port
Specifies the port to use when uploading traces. _**default:**_ 6831
## jaeger-endpoint
Specifies the endpoint to use when uploading traces to a collector. This takes priority over `jaeger-collector-host` if both are specified.
## jaeger-service-name
Specifies the service name to use for any traces created. _**default:**_ nginx
## jaeger-propagation-format
Specifies the traceparent/tracestate propagation format. _**default:**_ jaeger
## jaeger-sampler-type
Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. _**default:**_ const
## jaeger-sampler-param
Specifies the argument to be passed to the sampler constructor. Must be a number.
For const this should be 0 to never sample and 1 to always sample. _**default:**_ 1
## jaeger-sampler-host
Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL.
Leave blank to use default value (localhost). _**default:**_ http://127.0.0.1
## jaeger-sampler-port
Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. _**default:**_ 5778
## jaeger-trace-context-header-name
Specifies the header name used for passing trace context. _**default:**_ uber-trace-id
## jaeger-debug-header
Specifies the header name used for force sampling. _**default:**_ jaeger-debug-id
## jaeger-baggage-header
Specifies the header name used to submit baggage if there is no root span. _**default:**_ jaeger-baggage
## jaeger-tracer-baggage-header-prefix
Specifies the header prefix used to propagate baggage. _**default:**_ uberctx-
## datadog-collector-host
Specifies the datadog agent host to use when uploading traces. It must be a valid URL.
## datadog-collector-port
Specifies the port to use when uploading traces. _**default:**_ 8126
## datadog-service-name
Specifies the service name to use for any traces created. _**default:**_ nginx
## datadog-environment
Specifies the environment this trace belongs to. _**default:**_ prod
## datadog-operation-name-override
Overrides the operation name to use for any traces created. _**default:**_ nginx.handle
## datadog-priority-sampling
Specifies whether to use client-side sampling.
If true, client-side sampling is disabled (thus ignoring `sample_rate`) and distributed priority sampling is enabled, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. _**default:**_ true
## datadog-sample-rate
Specifies sample rate for any traces created.
This is effective only when `datadog-priority-sampling` is `false`. _**default:**_ 1.0
## enable-opentelemetry
Enables the nginx OpenTelemetry extension. _**default:**_ is disabled
_References:_
[https://github.com/open-telemetry/opentelemetry-cpp-contrib](https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/nginx)
## opentelemetry-operation-name
Specifies a custom name for the server span. _**default:**_ is empty
For example, set to "HTTP $request_method $uri".
## otlp-collector-host
Specifies the host to use when uploading traces. It must be a valid URL.
## otlp-collector-port
Specifies the port to use when uploading traces. _**default:**_ 4317
## otel-service-name
Specifies the service name to use for any traces created. _**default:**_ nginx
## opentelemetry-trust-incoming-span
Enables or disables using spans from incoming requests as parent for created ones. _**default:**_ true
## otel-sampler-parent-based
Uses a sampler implementation which, by default, will take a sample if the parent span is sampled. _**default:**_ false
## otel-sampler-ratio
Specifies sample rate for any traces created. _**default:**_ 0.01
## otel-sampler
Specifies the sampler to be used when sampling traces. The available samplers are: AlwaysOff, AlwaysOn, TraceIdRatioBased, remote. _**default:**_ AlwaysOff
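A sketch pulling the OpenTelemetry options above into one ConfigMap; the collector host is hypothetical and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  enable-opentelemetry: "true"
  otel-service-name: "ingress-nginx"
  # Hypothetical in-cluster OTLP collector.
  otlp-collector-host: "otel-collector.observability.svc"
  otlp-collector-port: "4317"
  # Sample roughly 5% of traces, honoring the parent span's decision when present.
  otel-sampler: "TraceIdRatioBased"
  otel-sampler-ratio: "0.05"
  otel-sampler-parent-based: "true"
```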
## main-snippet
Adds custom configuration to the main section of the nginx configuration.
## http-snippet
Adds custom configuration to the http section of the nginx configuration.
## server-snippet
Adds custom configuration to all the servers in the nginx configuration.
## stream-snippet
Adds custom configuration to the stream section of the nginx configuration.
## location-snippet
Adds custom configuration to all the locations in the nginx configuration.
You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to [provide your own nginx.tmpl](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/).
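Snippet values are passed through to the generated `nginx.conf` verbatim, so a YAML block scalar keeps multi-line snippets readable. The directives below are illustrative only, and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # Appended inside the http {} block.
  http-snippet: |
    map $http_user_agent $is_probe {
      default 0;
      "~*kube-probe" 1;
    }
  # Appended inside every server {} block.
  server-snippet: |
    add_header X-Served-By ingress-nginx always;
```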
## custom-http-errors
Enables which HTTP codes should be passed for processing with the [error_page directive](https://nginx.org/en/docs/http/ngx_http_core_module.html#error_page)
Setting at least one code also enables [proxy_intercept_errors](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors), which is required to process error_page.
Example usage: `custom-http-errors: 404,415`
## proxy-body-size
Sets the maximum allowed size of the client request body.
See NGINX [client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
## proxy-connect-timeout
Sets the timeout for [establishing a connection with a proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds.
It will also set the [grpc_connect_timeout](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout) for gRPC connections.
## proxy-read-timeout
Sets the timeout in seconds for [reading a response from the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response.
It will also set the [grpc_read_timeout](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_read_timeout) for gRPC connections.
## proxy-send-timeout
Sets the timeout in seconds for [transmitting a request to the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). The timeout is set only between two successive write operations, not for the transmission of the whole request.
It will also set the [grpc_send_timeout](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_send_timeout) for gRPC connections.
## proxy-buffers-number
Sets the number of buffers used for [reading a response from the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers), for a single connection.
## proxy-buffer-size
Sets the size of the buffer used for [reading the first part of the response](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header.
## proxy-busy-buffers-size
[Limits the total size of buffers that can be busy](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read.
## proxy-cookie-path
Sets a text that [should be changed in the path attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the “Set-Cookie” header fields of a proxied server response.
## proxy-cookie-domain
Sets a text that [should be changed in the domain attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the “Set-Cookie” header fields of a proxied server response.
## proxy-next-upstream
Specifies in [which cases](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream) a request should be passed to the next server.
## proxy-next-upstream-timeout
[Limits the time](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_timeout) in seconds during which a request can be passed to the next server.
## proxy-next-upstream-tries
Limit the number of [possible tries](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_tries) a request should be passed to the next server.
## proxy-redirect-from
Sets the original text that should be changed in the "Location" and "Refresh" header fields of a proxied server response. _**default:**_ off
_References:_
[https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect)
## proxy-request-buffering
Enables or disables [buffering of a client request body](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering).
## ssl-redirect
Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).
_**default:**_ "true"
## force-ssl-redirect
Sets the global value of redirects (308) to HTTPS if the server has a default TLS certificate (defined in extra-args).
_**default:**_ "false"
## denylist-source-range
Sets the default denylisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule.
See [ngx_http_access_module](https://nginx.org/en/docs/http/ngx_http_access_module.html).
## whitelist-source-range
Sets the default whitelisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule.
See [ngx_http_access_module](https://nginx.org/en/docs/http/ngx_http_access_module.html).
## skip-access-log-urls
Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like `/health` or `health-check` that make reading the logs more difficult. _**default:**_ is empty
## limit-rate
Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate](https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate)
## limit-rate-after
Sets the initial amount after which the further transmission of a response to a client will be rate limited.
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after](https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after)
## lua-shared-dicts
Customize default Lua shared dictionaries or define more. You can use the following syntax to do so:
```
lua-shared-dicts: "<my dict name>: <my dict size>, [<my dict name>: <my dict size>], ..."
```
For example, the following will set the default `certificate_data` dictionary to `100M` and will introduce a new dictionary called
`my_custom_plugin`:
```
lua-shared-dicts: "certificate_data: 100, my_custom_plugin: 5"
```
You can optionally set a size unit to allow for kilobyte-granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. Here is a similar example, but the `my_custom_plugin` dict is only 512KB.
```
lua-shared-dicts: "certificate_data: 100, my_custom_plugin: 512k"
```
## http-redirect-code
Sets the HTTP status code to be used in redirects.
Supported codes are [301](https://developer.mozilla.org/docs/Web/HTTP/Status/301), [302](https://developer.mozilla.org/docs/Web/HTTP/Status/302), [307](https://developer.mozilla.org/docs/Web/HTTP/Status/307) and [308](https://developer.mozilla.org/docs/Web/HTTP/Status/308).
_**default:**_ 308
> __Why is the default code 308?__
> [RFC 7238](https://tools.ietf.org/html/rfc7238) was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect in methods like POST.
## proxy-buffering
Enables or disables [buffering of responses from the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering).
## limit-req-status-code
Sets the [status code to return in response to rejected requests](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_status). _**default:**_ 503
## limit-conn-status-code
Sets the [status code to return in response to rejected connections](https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_status). _**default:**_ 503
## enable-syslog
Enable [syslog](https://nginx.org/en/docs/syslog.html) feature for access log and error log. _**default:**_ false
## syslog-host
Sets the address of syslog server. The address can be specified as a domain name or IP address.
## syslog-port
Sets the port of syslog server. _**default:**_ 514
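A sketch of sending access and error logs to a syslog endpoint; the host is hypothetical and the ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  enable-syslog: "true"
  # Hypothetical syslog endpoint reachable from the controller pods.
  syslog-host: "syslog.logging.svc.cluster.local"
  syslog-port: "514"
```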
## no-tls-redirect-locations
A comma-separated list of locations on which http requests will never get redirected to their https counterpart.
_**default:**_ "/.well-known/acme-challenge"
## global-allowed-response-headers
A comma-separated list of allowed response headers inside the [custom headers annotations](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#custom-headers)
## global-auth-url
A URL to an existing service that provides authentication for all the locations.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-url`.
Locations that should not get authenticated can be listed using `no-auth-locations`; see [no-auth-locations](#no-auth-locations). In addition, each service can be excluded from authentication via the annotation `enable-global-auth` set to "false".
_**default:**_ ""
_References:_ [https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication)
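A sketch of enabling global external authentication; the auth service URLs are hypothetical and the ConfigMap name and namespace are assumptions. Related keys described below ([global-auth-signin](#global-auth-signin), [global-auth-response-headers](#global-auth-response-headers), [no-auth-locations](#no-auth-locations)) are shown together for context.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # Hypothetical in-cluster auth service; every location is authenticated against it.
  global-auth-url: "http://oauth2-proxy.auth.svc.cluster.local/oauth2/auth"
  global-auth-signin: "https://auth.example.com/oauth2/start"
  global-auth-response-headers: "X-Auth-Request-User,X-Auth-Request-Email"
  # Keep ACME challenges reachable without authentication.
  no-auth-locations: "/.well-known/acme-challenge"
```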
## global-auth-method
An HTTP method to use for an existing service that provides authentication for all the locations.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-method`.
_**default:**_ ""
## global-auth-signin
Sets the location of the error page for an existing service that provides authentication for all the locations.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-signin`.
_**default:**_ ""
## global-auth-signin-redirect-param
Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-signin-redirect-param`.
_**default:**_ "rd"
## global-auth-response-headers
Sets the headers to pass to backend once authentication request completes. Applied to all the locations.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-response-headers`.
_**default:**_ ""
## global-auth-request-redirect
Sets the X-Auth-Request-Redirect header value. Applied to all the locations.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-request-redirect`.
_**default:**_ ""
## global-auth-snippet
Sets a custom snippet to use with external authentication. Applied to all the locations.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-snippet`.
_**default:**_ ""
## global-auth-cache-key
Enables caching for global auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`.
## global-auth-cache-duration
Set a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. _**default:**_ `200 202 401 5m`
## global-auth-always-set-cookie
Always set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
_**default:**_ false
## no-auth-locations
A comma-separated list of locations that should not get authenticated.
_**default:**_ "/.well-known/acme-challenge"
## block-cidrs
A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally.
_References:_
[https://nginx.org/en/docs/http/ngx_http_access_module.html#deny](https://nginx.org/en/docs/http/ngx_http_access_module.html#deny)
## block-user-agents
A comma-separated list of User-Agents, requests from which have to be blocked globally.
Both full strings and regular expressions can be used. More details about valid patterns can be found in the `map` Nginx directive documentation.
_References:_
[https://nginx.org/en/docs/http/ngx_http_map_module.html#map](https://nginx.org/en/docs/http/ngx_http_map_module.html#map)
## block-referers
A comma-separated list of Referers, requests from which have to be blocked globally.
Both full strings and regular expressions can be used. More details about valid patterns can be found in the `map` Nginx directive documentation.
_References:_
[https://nginx.org/en/docs/http/ngx_http_map_module.html#map](https://nginx.org/en/docs/http/ngx_http_map_module.html#map)
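A sketch combining the three block lists above; the values are hypothetical, and the regex entries use `map`-style patterns (`~*` for a case-insensitive match). The ConfigMap name and namespace are assumptions.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # Hypothetical networks to reject globally.
  block-cidrs: "192.0.2.0/24,198.51.100.0/24"
  # An exact User-Agent string and a case-insensitive regex.
  block-user-agents: "BadBot/1.0,~*scrapy"
  # Block requests whose Referer matches this regex.
  block-referers: "~*spam\\.example\\.com"
```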
## proxy-ssl-location-only
Sets whether proxy-ssl parameters should be applied only to locations and not to servers.
_**default:**_ is disabled
## default-type
Sets the default MIME type of a response.
_**default:**_ text/html
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#default_type](https://nginx.org/en/docs/http/ngx_http_core_module.html#default_type)
## service-upstream
Sets whether the service's Cluster IP and port should be used instead of a list of all its endpoints. This can be overwritten by an annotation on an Ingress rule.
_**default:**_ "false"
## ssl-reject-handshake
Sets whether to reject the SSL handshake for unknown virtual hosts. This parameter helps mitigate fingerprinting based on the default certificate of the ingress controller.
_**default:**_ "false"
_References:_
[https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_reject_handshake](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_reject_handshake)
## debug-connections
Enables debugging log for selected client connections.
_**default:**_ ""
_References:_
[http://nginx.org/en/docs/ngx_core_module.html#debug_connection](http://nginx.org/en/docs/ngx_core_module.html#debug_connection)
## strict-validate-path-type
Ingress objects contain a field called pathType that defines the proxy behavior. It can be `Exact`, `Prefix` or `ImplementationSpecific`.
When pathType is configured as `Exact` or `Prefix`, a stricter validation is applied, allowing only paths starting with "/" and
containing only alphanumeric characters, "-", "_" and additional "/".
When this option is enabled, the validation happens in the Admission Webhook, and any Ingress that does not use pathType `ImplementationSpecific`
and contains invalid characters will be denied.
This means that Ingress objects that rely on paths containing regex characters should use the `ImplementationSpecific` pathType.
The cluster admin should establish validation rules using mechanisms like [Open Policy Agent](https://www.openpolicyagent.org/) to
validate that only authorized users can use the `ImplementationSpecific` pathType and that only the authorized characters can be used.
_**default:**_ "true"
## grpc-buffer-size-kb
Sets the configuration for the gRPC buffer size parameter. If not set, the NGINX default will be used.
_References:_
[https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size)
## relative-redirects
Use relative redirects instead of absolute redirects. Absolute redirects are the default in nginx. RFC 7231 has allowed relative redirects since 2014.
Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/relative-redirects`.
_**default:**_ "false"
_References:_
- [https://nginx.org/en/docs/http/ngx_http_core_module.html#absolute_redirect](https://nginx.org/en/docs/http/ngx_http_core_module.html#absolute_redirect)
- [https://datatracker.ietf.org/doc/html/rfc7231#section-7.1.2](https://datatracker.ietf.org/doc/html/rfc7231#section-7.1.2) | ingress nginx | ConfigMaps ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable The ConfigMap API resource stores configuration data as key value pairs The data provides the configurations for system components for the nginx controller In order to overwrite nginx controller configuration values as seen in config go https github com kubernetes ingress nginx blob main internal ingress controller config config go you can add key value pairs to the data section of the config map For Example yaml data map hash bucket size 128 ssl protocols SSLv2 important The key and values in a ConfigMap can only be strings This means that we want a value with boolean values we need to quote the values like true or false Same for numbers like 100 Slice types defined below as string or int can be provided as a comma delimited string Configuration options The following table shows a configuration option s name type and the default value name type default notes add headers add headers string allow backend server header allow backend server header bool false allow cross namespace resources allow cross namespace resources bool false allow snippet annotations allow snippet annotations bool false annotations risk level annotations risk level string High annotation value word blocklist annotation value word blocklist string array hide headers hide headers string array empty access log params access log params string access log path access log path string var log nginx access log http access log path http access log path string stream access log path stream access log path string enable access log for default backend enable access log for default backend bool false error log path error log path string var log nginx error log enable modsecurity enable modsecurity bool false modsecurity snippet modsecurity snippet string enable owasp modsecurity crs enable owasp modsecurity crs bool false client header buffer size client header buffer size string 1k client header timeout client header timeout int 60 client body buffer size client body buffer size string 8k client body timeout client body timeout int 60 disable access log disable access log bool false disable ipv6 disable ipv6 bool false disable ipv6 dns disable ipv6 dns bool false enable underscores in headers enable underscores in headers bool false enable ocsp enable ocsp bool false ignore invalid headers ignore invalid headers bool true retry non idempotent retry non idempotent bool false error log level error log level string notice http2 max field size http2 max field size string DEPRECATED in favour of large client header buffers large client header buffers http2 max header size http2 max header size string DEPRECATED in favour of large client header buffers large client header buffers http2 max requests http2 max requests int 0 DEPRECATED in favour of keepalive requests keepalive requests http2 max concurrent streams http2 max concurrent streams int 128 hsts hsts bool true hsts include subdomains hsts include subdomains bool true hsts max age hsts max age string 31536000 hsts preload hsts preload bool false keep alive keep alive int 75 keep alive requests keep alive requests int 1000 large client header buffers large client header buffers string 4 8k log format escape none log format escape none bool false log format escape json log format escape json bool false log format 
upstream log format upstream string remote addr remote user time local request status body bytes sent http referer http user agent request length request time proxy upstream name proxy alternative upstream name upstream addr upstream response length upstream response time upstream status req id log format stream log format stream string remote addr time local protocol status bytes sent bytes received session time enable multi accept enable multi accept bool true max worker connections max worker connections int 16384 max worker open files max worker open files int 0 map hash bucket size max hash bucket size int 64 nginx status ipv4 whitelist nginx status ipv4 whitelist string 127 0 0 1 nginx status ipv6 whitelist nginx status ipv6 whitelist string 1 proxy real ip cidr proxy real ip cidr string 0 0 0 0 0 proxy set headers proxy set headers string server name hash max size server name hash max size int 1024 server name hash bucket size server name hash bucket size int size of the processor s cache line proxy headers hash max size proxy headers hash max size int 512 proxy headers hash bucket size proxy headers hash bucket size int 64 reuse port reuse port bool true server tokens server tokens bool false ssl ciphers ssl ciphers string ECDHE ECDSA AES128 GCM SHA256 ECDHE RSA AES128 GCM SHA256 ECDHE ECDSA AES256 GCM SHA384 ECDHE RSA AES256 GCM SHA384 ECDHE ECDSA CHACHA20 POLY1305 ECDHE RSA CHACHA20 POLY1305 DHE RSA AES128 GCM SHA256 DHE RSA AES256 GCM SHA384 ssl ecdh curve ssl ecdh curve string auto ssl dh param ssl dh param string ssl protocols ssl protocols string TLSv1 2 TLSv1 3 ssl session cache ssl session cache bool true ssl session cache size ssl session cache size string 10m ssl session tickets ssl session tickets bool false ssl session ticket key ssl session ticket key string Randomly Generated ssl session timeout ssl session timeout string 10m ssl buffer size ssl buffer size string 4k use proxy protocol use proxy protocol bool false proxy protocol header timeout proxy protocol header timeout string 5s enable aio write enable aio write bool true use gzip use gzip bool false use geoip use geoip bool true use geoip2 use geoip2 bool false geoip2 autoreload in minutes geoip2 autoreload in minutes int 0 enable brotli enable brotli bool false brotli level brotli level int 4 brotli min length brotli min length int 20 brotli types brotli types string application xml rss application atom xml application javascript application x javascript application json application rss xml application vnd ms fontobject application x font ttf application x web app manifest json application xhtml xml application xml font opentype image svg xml image x icon text css text javascript text plain text x component use http2 use http2 bool true gzip disable gzip disable string gzip level gzip level int 1 gzip min length gzip min length int 256 gzip types gzip types string application atom xml application javascript application x javascript application json application rss xml application vnd ms fontobject application x font ttf application x web app manifest json application xhtml xml application xml font opentype image svg xml image x icon text css text javascript text plain text x component worker processes worker processes string Number of CPUs worker cpu affinity worker cpu affinity string worker shutdown timeout worker shutdown timeout string 240s enable serial reloads enable serial reloads bool false load balance load balance string round robin variables hash bucket size variables hash bucket size int 128 
|variables-hash-max-size|int|2048||
|upstream-keepalive-connections|int|320||
|upstream-keepalive-time|string|1h||
|upstream-keepalive-timeout|int|60||
|upstream-keepalive-requests|int|10000||
|limit-conn-zone-variable|string|$binary_remote_addr||
|proxy-stream-timeout|string|600s||
|proxy-stream-next-upstream|bool|true||
|proxy-stream-next-upstream-timeout|string|600s||
|proxy-stream-next-upstream-tries|int|3||
|proxy-stream-responses|int|1||
|bind-address|string|||
|use-forwarded-headers|bool|false||
|enable-real-ip|bool|false||
|forwarded-for-header|string|X-Forwarded-For||
|compute-full-forwarded-for|bool|false||
|proxy-add-original-uri-header|bool|false||
|generate-request-id|bool|true||
|jaeger-collector-host|string|||
|jaeger-collector-port|int|6831||
|jaeger-endpoint|string|||
|jaeger-service-name|string|nginx||
|jaeger-propagation-format|string|jaeger||
|jaeger-sampler-type|string|const||
|jaeger-sampler-param|string|1||
|jaeger-sampler-host|string|http://127.0.0.1||
|jaeger-sampler-port|int|5778||
|jaeger-trace-context-header-name|string|uber-trace-id||
|jaeger-debug-header|string|uber-debug-id||
|jaeger-baggage-header|string|jaeger-baggage||
|jaeger-trace-baggage-header-prefix|string|uberctx-||
|datadog-collector-host|string|||
|datadog-collector-port|int|8126||
|datadog-service-name|string|nginx||
|datadog-environment|string|prod||
|datadog-operation-name-override|string|nginx.handle||
|datadog-priority-sampling|bool|true||
|datadog-sample-rate|float|1.0||
|enable-opentelemetry|bool|false||
|opentelemetry-trust-incoming-span|bool|true||
|opentelemetry-operation-name|string|||
|opentelemetry-config|string|/etc/nginx/opentelemetry.toml||
|otlp-collector-host|string|||
|otlp-collector-port|int|4317||
|otel-max-queuesize|int|||
|otel-schedule-delay-millis|int|||
|otel-max-export-batch-size|int|||
|otel-service-name|string|nginx||
|otel-sampler|string|AlwaysOff||
|otel-sampler-parent-based|bool|false||
|otel-sampler-ratio|float|0.01||
|main-snippet|string|||
|http-snippet|string|||
|server-snippet|string|||
|stream-snippet|string|||
|location-snippet|string|||
|custom-http-errors|[]int|||
|proxy-body-size|string|1m||
|proxy-connect-timeout|int|5||
|proxy-read-timeout|int|60||
|proxy-send-timeout|int|60||
|proxy-buffers-number|int|4||
|proxy-buffer-size|string|4k||
|proxy-busy-buffers-size|string|8k||
|proxy-cookie-path|string|off||
|proxy-cookie-domain|string|off||
|proxy-next-upstream|string|error timeout||
|proxy-next-upstream-timeout|int|0||
|proxy-next-upstream-tries|int|3||
|proxy-redirect-from|string|off||
|proxy-request-buffering|string|on||
|ssl-redirect|bool|true||
|force-ssl-redirect|bool|false||
|denylist-source-range|[]string|||
|whitelist-source-range|[]string|||
|skip-access-log-urls|[]string|||
|limit-rate|int|0||
|limit-rate-after|int|0||
|lua-shared-dicts|string|||
|http-redirect-code|int|308||
|proxy-buffering|string|off||
|limit-req-status-code|int|503||
|limit-conn-status-code|int|503||
|enable-syslog|bool|false||
|syslog-host|string|||
|syslog-port|int|514||
|no-tls-redirect-locations|string|/.well-known/acme-challenge||
|global-allowed-response-headers|string|||
|global-auth-url|string|||
|global-auth-method|string|||
|global-auth-signin|string|||
|global-auth-signin-redirect-param|string|rd||
|global-auth-response-headers|string|||
|global-auth-request-redirect|string|||
|global-auth-snippet|string|||
|global-auth-cache-key|string|||
|global-auth-cache-duration|string|200 202 401 5m||
|no-auth-locations|string|/.well-known/acme-challenge||
|block-cidrs|string|||
|block-user-agents|string|||
|block-referers|string|||
|proxy-ssl-location-only|bool|false||
|default-type|string|text/html||
|service-upstream|bool|false||
|ssl-reject-handshake|bool|false||
|debug-connections|string|127.0.0.1,1.1.1.1/24||
|strict-validate-path-type|bool|true||
|grpc-buffer-size-kb|int|0||
|relative-redirects|bool|false||

### add-headers

Sets custom headers from a named ConfigMap before sending traffic to the client. See the [proxy-set-headers](#proxy-set-headers) [example](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers).

### allow-backend-server-header

Enables returning the `Server` header from the backend instead of the generic nginx string. (default: disabled)

### allow-cross-namespace-resources

Enables users to consume cross-namespace resources in annotations, which was previously allowed. (default: false)

Annotations that may be impacted by this change: auth-secret, auth-proxy-set-header, auth-tls-secret, fastcgi-params-configmap, proxy-ssl-secret.

### allow-snippet-annotations

Enables Ingress to parse and add snippet annotations/directives created by the user. (default: false)

Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file.

### annotations-risk-level

Represents the risk accepted on an annotation. If the risk is, for instance, Medium, annotations with risk High and Critical will not be accepted. Accepted values are Critical, High, Medium and Low. (default: High)
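As a sketch of how the two settings above combine (the values below are an example policy, not a recommendation), the following keys would go in the controller ConfigMap's `data` section:

```yaml
data:
  # Keep cross-namespace secret/configmap references disabled.
  allow-cross-namespace-resources: "false"
  # Accept only annotations whose documented risk is Medium or lower;
  # High and Critical annotations are then rejected.
  annotations-risk-level: "Medium"
```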
annotation value word blocklist Contains a comma separated value of chars words that are well known of being used to abuse Ingress configuration and must be blocked Related to CVE 2021 25742 https github com kubernetes ingress nginx issues 7837 When an annotation is detected with a value that matches one of the blocked bad words the whole Ingress won t be configured default When doing this the default blocklist is override which means that the Ingress admin should add all the words that should be blocked here is a suggested block list suggested load module lua package by lua location root proxy pass serviceaccount hide headers Sets additional header that will not be passed from the upstream server to the client response default empty References https nginx org en docs http ngx http proxy module html proxy hide header https nginx org en docs http ngx http proxy module html proxy hide header access log params Additional params for access log For example buffer 16k gzip flush 1m References https nginx org en docs http ngx http log module html access log https nginx org en docs http ngx http log module html access log access log path Access log path for both http and stream context Goes to var log nginx access log by default Note the file var log nginx access log is a symlink to dev stdout http access log path Access log path for http context globally default Note If not specified the access log path will be used stream access log path Access log path for stream context globally default Note If not specified the access log path will be used enable access log for default backend Enables logging access to default backend default is disabled error log path Error log path Goes to var log nginx error log by default Note the file var log nginx error log is a symlink to dev stderr References https nginx org en docs ngx core module html error log https nginx org en docs ngx core module html error log enable modsecurity Enables the modsecurity module for NGINX default is disabled enable owasp modsecurity crs Enables the OWASP ModSecurity Core Rule Set CRS default is disabled modsecurity snippet Adds custom rules to modsecurity section of nginx configuration client header buffer size Allows to configure a custom buffer size for reading client request header References https nginx org en docs http ngx http core module html client header buffer size https nginx org en docs http ngx http core module html client header buffer size client header timeout Defines a timeout for reading client request header in seconds References https nginx org en docs http ngx http core module html client header timeout https nginx org en docs http ngx http core module html client header timeout client body buffer size Sets buffer size for reading client request body References https nginx org en docs http ngx http core module html client body buffer size https nginx org en docs http ngx http core module html client body buffer size client body timeout Defines a timeout for reading client request body in seconds References https nginx org en docs http ngx http core module html client body timeout https nginx org en docs http ngx http core module html client body timeout disable access log Disables the Access Log from the entire Ingress Controller default false References https nginx org en docs http ngx http log module html access log https nginx org en docs http ngx http log module html access log disable ipv6 Disable listening on IPV6 default false IPv6 listening is enabled disable ipv6 dns Disable IPV6 for nginx DNS 
resolver default false IPv6 resolving enabled enable underscores in headers Enables underscores in header names default is disabled enable ocsp Enables Online Certificate Status Protocol stapling https en wikipedia org wiki OCSP stapling OCSP support default is disabled ignore invalid headers Set if header fields with invalid names should be ignored default is enabled retry non idempotent Since 1 9 13 NGINX will not retry non idempotent requests POST LOCK PATCH in case of an error in the upstream server The previous behavior can be restored using the value true error log level Configures the logging level of errors Log levels above are listed in the order of increasing severity References https nginx org en docs ngx core module html error log https nginx org en docs ngx core module html error log http2 max field size warning This feature was deprecated in 1 1 3 and will be removed in 1 3 0 Use large client header buffers large client header buffers instead Limits the maximum size of an HPACK compressed request header field References https nginx org en docs http ngx http v2 module html http2 max field size https nginx org en docs http ngx http v2 module html http2 max field size http2 max header size warning This feature was deprecated in 1 1 3 and will be removed in 1 3 0 Use large client header buffers large client header buffers instead Limits the maximum size of the entire request header list after HPACK decompression References https nginx org en docs http ngx http v2 module html http2 max header size https nginx org en docs http ngx http v2 module html http2 max header size http2 max requests warning This feature was deprecated in 1 1 3 and will be removed in 1 3 0 Use upstream keepalive requests upstream keepalive requests instead Sets the maximum number of requests including push requests that can be served through one HTTP 2 connection after which the next client request will lead to connection closing and the need of establishing a new connection References https nginx org en docs http ngx http v2 module html http2 max requests https nginx org en docs http ngx http v2 module html http2 max requests http2 max concurrent streams Sets the maximum number of concurrent HTTP 2 streams in a connection References https nginx org en docs http ngx http v2 module html http2 max concurrent streams https nginx org en docs http ngx http v2 module html http2 max concurrent streams hsts Enables or disables the header HSTS in servers running SSL HTTP Strict Transport Security often abbreviated as HSTS is a security feature HTTP header that tell browsers that it should only be communicated with using HTTPS instead of using HTTP It provides protection against protocol downgrade attacks and cookie theft References https developer mozilla org en US docs Web Security HTTP strict transport security https developer mozilla org en US docs Web Security HTTP strict transport security https blog qualys com securitylabs 2016 03 28 the importance of a proper http strict transport security implementation on your web server https blog qualys com securitylabs 2016 03 28 the importance of a proper http strict transport security implementation on your web server hsts include subdomains Enables or disables the use of HSTS in all the subdomains of the server name hsts max age Sets the time in seconds that the browser should remember that this site is only to be accessed using HTTPS hsts preload Enables or disables the preload attribute in the HSTS feature when it is enabled keep alive Sets the time in seconds during 
which a keep alive client connection will stay open on the server side The zero value disables keep alive client connections References https nginx org en docs http ngx http core module html keepalive timeout https nginx org en docs http ngx http core module html keepalive timeout important Setting keep alive 0 will most likely break concurrent http 2 requests due to changes introduced with nginx 1 19 7 Changes with nginx 1 19 7 16 Feb 2021 Change connections handling in HTTP 2 has been changed to better match HTTP 1 x the http2 recv timeout http2 idle timeout and http2 max requests directives have been removed the keepalive timeout and keepalive requests directives should be used instead References nginx change log https nginx org en CHANGES nginx issue tracker https trac nginx org nginx ticket 2155 nginx mailing list https mailman nginx org pipermail nginx 2021 May 060697 html keep alive requests Sets the maximum number of requests that can be served through one keep alive connection References https nginx org en docs http ngx http core module html keepalive requests https nginx org en docs http ngx http core module html keepalive requests large client header buffers Sets the maximum number and size of buffers used for reading large client request header default 4 8k References https nginx org en docs http ngx http core module html large client header buffers https nginx org en docs http ngx http core module html large client header buffers log format escape none Sets if the escape parameter is disabled entirely for character escaping in variables true or controlled by log format escape json false Sets the nginx log format https nginx org en docs http ngx http log module html log format log format escape json Sets if the escape parameter allows JSON true or default characters escaping in variables false Sets the nginx log format https nginx org en docs http ngx http log module html log format log format upstream Sets the nginx log format https nginx org en docs http ngx http log module html log format Example for json output json log format upstream time time iso8601 remote addr proxy protocol addr x forwarded for proxy add x forwarded for request id req id remote user remote user bytes sent bytes sent request time request time status status vhost host request proto server protocol path uri request query args request length request length duration request time method request method http referrer http referer http user agent http user agent Please check the log format log format md for definition of each field log format stream Sets the nginx stream format https nginx org en docs stream ngx stream log module html log format enable multi accept If disabled a worker process will accept one new connection at a time Otherwise a worker process will accept all new connections at a time default true References https nginx org en docs ngx core module html multi accept https nginx org en docs ngx core module html multi accept max worker connections Sets the maximum number of simultaneous connections https nginx org en docs ngx core module html worker connections that can be opened by each worker process 0 will use the value of max worker open files max worker open files default 16384 tip Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization even on idle max worker open files Sets the maximum number of files https nginx org en docs ngx core module html worker rlimit nofile that can be opened by each worker process The default of 0 means max open files 
system s limit 1024 default 0 map hash bucket size Sets the bucket size for the map variables hash tables https nginx org en docs http ngx http map module html map hash bucket size The details of setting up hash tables are provided in a separate document https nginx org en docs hash html proxy real ip cidr If use forwarded headers or use proxy protocol is enabled proxy real ip cidr defines the default IP network address of your external load balancer Can be a comma separated list of CIDR blocks default 0 0 0 0 0 proxy set headers Sets custom headers from named configmap before sending traffic to backends The value format is namespace name See example https kubernetes github io ingress nginx examples customization custom headers server name hash max size Sets the maximum size of the server names hash tables https nginx org en docs http ngx http core module html server names hash max size used in server names map directive s values MIME types names of request header strings etc References https nginx org en docs hash html https nginx org en docs hash html server name hash bucket size Sets the size of the bucket for the server names hash tables References https nginx org en docs hash html https nginx org en docs hash html https nginx org en docs http ngx http core module html server names hash bucket size https nginx org en docs http ngx http core module html server names hash bucket size proxy headers hash max size Sets the maximum size of the proxy headers hash tables References https nginx org en docs hash html https nginx org en docs hash html https nginx org en docs http ngx http proxy module html proxy headers hash max size https nginx org en docs http ngx http proxy module html proxy headers hash max size reuse port Instructs NGINX to create an individual listening socket for each worker process using the SO REUSEPORT socket option allowing a kernel to distribute incoming connections between worker processes default true proxy headers hash bucket size Sets the size of the bucket for the proxy headers hash tables References https nginx org en docs hash html https nginx org en docs hash html https nginx org en docs http ngx http proxy module html proxy headers hash bucket size https nginx org en docs http ngx http proxy module html proxy headers hash bucket size server tokens Send NGINX Server header in responses and display NGINX version in error pages default is disabled ssl ciphers Sets the ciphers https nginx org en docs http ngx http ssl module html ssl ciphers list to enable The ciphers are specified in the format understood by the OpenSSL library The default cipher list is ECDHE ECDSA AES128 GCM SHA256 ECDHE RSA AES128 GCM SHA256 ECDHE ECDSA AES256 GCM SHA384 ECDHE RSA AES256 GCM SHA384 ECDHE ECDSA CHACHA20 POLY1305 ECDHE RSA CHACHA20 POLY1305 DHE RSA AES128 GCM SHA256 DHE RSA AES256 GCM SHA384 The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority The recommendation above prioritizes algorithms that provide perfect forward secrecy https wiki mozilla org Security Server Side TLS Forward Secrecy DHE based cyphers will not be available until DH parameter is configured Custom DH parameters for perfect forward secrecy https github com kubernetes ingress nginx tree main docs examples customization ssl dh param Please check the Mozilla SSL Configuration Generator https mozilla github io server side tls ssl config generator Note ssl prefer server ciphers directive will be enabled by default for http context ssl ecdh curve 
Specifies a curve for ECDHE ciphers References https nginx org en docs http ngx http ssl module html ssl ecdh curve https nginx org en docs http ngx http ssl module html ssl ecdh curve ssl dh param Sets the name of the secret that contains Diffie Hellman key to help with Perfect Forward Secrecy References https wiki openssl org index php Diffie Hellman parameters https wiki openssl org index php Diffie Hellman parameters https wiki mozilla org Security Server Side TLS DHE handshake and dhparam https wiki mozilla org Security Server Side TLS DHE handshake and dhparam https nginx org en docs http ngx http ssl module html ssl dhparam https nginx org en docs http ngx http ssl module html ssl dhparam ssl protocols Sets the SSL protocols https nginx org en docs http ngx http ssl module html ssl protocols to use The default is TLSv1 2 TLSv1 3 Please check the result of the configuration using https ssllabs com ssltest analyze html or https testssl sh ssl early data Enables or disables TLS 1 3 early data https tools ietf org html rfc8446 section 2 3 also known as Zero Round Trip Time Resumption 0 RTT This requires ssl protocols to have TLSv1 3 enabled Enable this with caution because requests sent within early data are subject to replay attacks https tools ietf org html rfc8470 ssl early data https nginx org en docs http ngx http ssl module html ssl early data The default is false ssl session cache Enables or disables the use of shared SSL cache https nginx org en docs http ngx http ssl module html ssl session cache among worker processes ssl session cache size Sets the size of the SSL shared session cache https nginx org en docs http ngx http ssl module html ssl session cache between all worker processes ssl session tickets Enables or disables session resumption through TLS session tickets https nginx org en docs http ngx http ssl module html ssl session tickets ssl session ticket key Sets the secret key used to encrypt and decrypt TLS session tickets The value must be a valid base64 string To create a ticket openssl rand 80 openssl enc A base64 TLS session ticket key https nginx org en docs http ngx http ssl module html ssl session tickets by default a randomly generated key is used ssl session timeout Sets the time during which a client may reuse the session https nginx org en docs http ngx http ssl module html ssl session timeout parameters stored in a cache ssl buffer size Sets the size of the SSL buffer https nginx org en docs http ngx http ssl module html ssl buffer size used for sending data The default of 4k helps NGINX to improve TLS Time To First Byte TTTFB References https www igvita com 2013 12 16 optimizing nginx tls time to first byte https www igvita com 2013 12 16 optimizing nginx tls time to first byte use proxy protocol Enables or disables the PROXY protocol https www nginx com resources admin guide proxy protocol to receive client connection real IP address information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer ELB proxy protocol header timeout Sets the timeout value for receiving the proxy protocol headers The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection default 5s enable aio write Enables or disables the directive aio write https nginx org en docs http ngx http core module html aio write that writes files asynchronously default true use gzip Enables or disables compression of HTTP responses using the gzip module https nginx org en docs http ngx http gzip module 
html MIME types to compress are controlled by gzip types gzip types default false use geoip Enables or disables geoip module https nginx org en docs http ngx http geoip module html that creates variables with values depending on the client IP address using the precompiled MaxMind databases default true Note MaxMind legacy databases are discontinued and will not receive updates after 2019 01 02 cf discontinuation notice https support maxmind com geolite legacy discontinuation notice Consider use geoip2 use geoip2 below use geoip2 Enables the geoip2 module https github com leev ngx http geoip2 module for NGINX Since 0 27 0 and due to a change in the MaxMind databases https blog maxmind com 2019 12 significant changes to accessing and using geolite2 databases a license is required to have access to the databases For this reason it is required to define a new flag maxmind license key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller Alternatively it is possible to use a volume to mount the files etc ingress controller geoip GeoLite2 City mmdb and etc ingress controller geoip GeoLite2 ASN mmdb avoiding the overhead of the download important If the feature is enabled but the files are missing GeoIP2 will not be enabled default false geoip2 autoreload in minutes Enables the geoip2 module https github com leev ngx http geoip2 module autoreload in MaxMind databases setting the interval in minutes default 0 enable brotli Enables or disables compression of HTTP responses using the brotli module https github com google ngx brotli The default mime type list to compress is application xml rss application atom xml application javascript application x javascript application json application rss xml application vnd ms fontobject application x font ttf application x web app manifest json application xhtml xml application xml font opentype image svg xml image x icon text css text plain text x component default false Note Brotli does not works in Safari 11 For more information see https caniuse com feat brotli https caniuse com feat brotli brotli level Sets the Brotli Compression Level that will be used default 4 brotli min length Minimum length of responses in bytes that will be eligible for brotli compression default 20 brotli types Sets the MIME Types that will be compressed on the fly by brotli default application xml rss application atom xml application javascript application x javascript application json application rss xml application vnd ms fontobject application x font ttf application x web app manifest json application xhtml xml application xml font opentype image svg xml image x icon text css text plain text x component use http2 Enables or disables HTTP 2 https nginx org en docs http ngx http v2 module html support in secure connections gzip disable Disables gzipping http nginx org en docs http ngx http gzip module html gzip disable of responses for requests with User Agent header fields matching any of the specified regular expressions gzip level Sets the gzip Compression Level that will be used default 1 gzip min length Minimum length of responses to be returned to the client before it is eligible for gzip compression in bytes default 256 gzip types Sets the MIME types in addition to text html to compress The special value matches any MIME type Responses with the text html type are always compressed if use gzip use gzip is enabled default application atom xml application javascript application x javascript application json 
application rss xml application vnd ms fontobject application x font ttf application x web app manifest json application xhtml xml application xml font opentype image svg xml image x icon text css text plain text x component worker processes Sets the number of worker processes https nginx org en docs ngx core module html worker processes The default of auto means number of available CPU cores worker cpu affinity Binds worker processes to the sets of CPUs worker cpu affinity https nginx org en docs ngx core module html worker cpu affinity By default worker processes are not bound to any specific CPUs The value can be empty string indicate no affinity is applied cpumask e g 0001 0010 0100 1000 to bind processes to specific cpus auto binding worker processes automatically to available CPUs worker shutdown timeout Sets a timeout for Nginx to wait for worker to gracefully shutdown https nginx org en docs ngx core module html worker shutdown timeout default 240s load balance Sets the algorithm to use for load balancing The value can either be round robin to use the default round robin loadbalancer ewma to use the Peak EWMA method for routing implementation https github com kubernetes ingress nginx blob main rootfs etc nginx lua balancer ewma lua The default is round robin To load balance using consistent hashing of IP or other variables consider the nginx ingress kubernetes io upstream hash by annotation To load balance using session cookies consider the nginx ingress kubernetes io affinity annotation References https nginx org en docs http load balancing html https nginx org en docs http load balancing html variables hash bucket size Sets the bucket size for the variables hash table References https nginx org en docs http ngx http map module html variables hash bucket size https nginx org en docs http ngx http map module html variables hash bucket size variables hash max size Sets the maximum size of the variables hash table References https nginx org en docs http ngx http map module html variables hash max size https nginx org en docs http ngx http map module html variables hash max size upstream keepalive connections Activates the cache for connections to upstream servers The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process When this number is exceeded the least recently used connections are closed default 320 References https nginx org en docs http ngx http upstream module html keepalive https nginx org en docs http ngx http upstream module html keepalive upstream keepalive time Sets the maximum time during which requests can be processed through one keepalive connection default 1h References http nginx org en docs http ngx http upstream module html keepalive time http nginx org en docs http ngx http upstream module html keepalive time upstream keepalive timeout Sets a timeout during which an idle keepalive connection to an upstream server will stay open default 60 References https nginx org en docs http ngx http upstream module html keepalive timeout https nginx org en docs http ngx http upstream module html keepalive timeout upstream keepalive requests Sets the maximum number of requests that can be served through one keepalive connection After the maximum number of requests is made the connection is closed default 10000 References https nginx org en docs http ngx http upstream module html keepalive requests https nginx org en docs http ngx http upstream module html keepalive requests 
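A minimal sketch of tuning the upstream keepalive pool described above; the numbers are illustrative assumptions, not recommended values:

```yaml
data:
  # Cache up to 64 idle keepalive connections per worker to each upstream.
  upstream-keepalive-connections: "64"
  # Close an idle upstream connection after 30 seconds.
  upstream-keepalive-timeout: "30"
  # Recycle a keepalive connection after it has served 1000 requests.
  upstream-keepalive-requests: "1000"
```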
limit conn zone variable Sets parameters for a shared memory zone that will keep states for various keys of limit conn zone https nginx org en docs http ngx http limit conn module html limit conn zone The default of binary remote addr variable s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses proxy stream timeout Sets the timeout between two successive read or write operations on client or proxied server connections If no data is transmitted within this time the connection is closed References https nginx org en docs stream ngx stream proxy module html proxy timeout https nginx org en docs stream ngx stream proxy module html proxy timeout proxy stream next upstream When a connection to the proxied server cannot be established determines whether a client connection will be passed to the next server References https nginx org en docs stream ngx stream proxy module html proxy next upstream https nginx org en docs stream ngx stream proxy module html proxy next upstream proxy stream next upstream timeout Limits the time allowed to pass a connection to the next server The 0 value turns off this limitation References https nginx org en docs stream ngx stream proxy module html proxy next upstream timeout https nginx org en docs stream ngx stream proxy module html proxy next upstream timeout proxy stream next upstream tries Limits the number of possible tries a request should be passed to the next server The 0 value turns off this limitation References https nginx org en docs stream ngx stream proxy module html proxy next upstream tries https nginx org en docs stream ngx stream proxy module html proxy next upstream timeout proxy stream responses Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used References https nginx org en docs stream ngx stream proxy module html proxy responses https nginx org en docs stream ngx stream proxy module html proxy responses bind address Sets the addresses on which the server will accept requests instead of It should be noted that these addresses must exist in the runtime environment or the controller will crash loop use forwarded headers If true NGINX passes the incoming X Forwarded headers to upstreams Use this option when NGINX is behind another L7 proxy load balancer that is setting these headers If false NGINX ignores incoming X Forwarded headers filling them with the request information it sees Use this option if NGINX is exposed directly to the internet or it s behind a L3 packet based load balancer that doesn t alter the source IP in the packets enable real ip enable real ip enables the configuration of https nginx org en docs http ngx http realip module html https nginx org en docs http ngx http realip module html Specific attributes of the module can be configured further by using forwarded for header and proxy real ip cidr settings forwarded for header Sets the header field for identifying the originating IP address of a client default X Forwarded For compute full forwarded for Append the remote address to the X Forwarded For header instead of replacing it When this option is enabled the upstream application is responsible for extracting the client IP based on its own list of trusted proxies proxy add original uri header Adds an X Original Uri header with the original request URI to the backend request generate request id Ensures that X Request ID is defaulted to a random value if no X Request ID is present in the request jaeger collector host Specifies the 
host to use when uploading traces It must be a valid URL jaeger collector port Specifies the port to use when uploading traces default 6831 jaeger endpoint Specifies the endpoint to use when uploading traces to a collector This takes priority over jaeger collector host if both are specified jaeger service name Specifies the service name to use for any traces created default nginx jaeger propagation format Specifies the traceparent tracestate propagation format default jaeger jaeger sampler type Specifies the sampler to be used when sampling traces The available samplers are const probabilistic ratelimiting remote default const jaeger sampler param Specifies the argument to be passed to the sampler constructor Must be a number For const this should be 0 to never sample and 1 to always sample default 1 jaeger sampler host Specifies the custom remote sampler host to be passed to the sampler constructor Must be a valid URL Leave blank to use default value localhost default http 127 0 0 1 jaeger sampler port Specifies the custom remote sampler port to be passed to the sampler constructor Must be a number default 5778 jaeger trace context header name Specifies the header name used for passing trace context default uber trace id jaeger debug header Specifies the header name used for force sampling default jaeger debug id jaeger baggage header Specifies the header name used to submit baggage if there is no root span default jaeger baggage jaeger tracer baggage header prefix Specifies the header prefix used to propagate baggage default uberctx datadog collector host Specifies the datadog agent host to use when uploading traces It must be a valid URL datadog collector port Specifies the port to use when uploading traces default 8126 datadog service name Specifies the service name to use for any traces created default nginx datadog environment Specifies the environment this trace belongs to default prod datadog operation name override Overrides the operation name to use for any traces crated default nginx handle datadog priority sampling Specifies to use client side sampling If true disables client side sampling thus ignoring sample rate and enables distributed priority sampling where traces are sampled based on a combination of user assigned priorities and configuration from the agent default true datadog sample rate Specifies sample rate for any traces created This is effective only when datadog priority sampling is false default 1 0 enable opentelemetry Enables the nginx OpenTelemetry extension default is disabled References https github com open telemetry opentelemetry cpp contrib https github com open telemetry opentelemetry cpp contrib tree main instrumentation nginx opentelemetry operation name Specifies a custom name for the server span default is empty For example set to HTTP request method uri otlp collector host Specifies the host to use when uploading traces It must be a valid URL otlp collector port Specifies the port to use when uploading traces default 4317 otel service name Specifies the service name to use for any traces created default nginx opentelemetry trust incoming span true Enables or disables using spans from incoming requests as parent for created ones default true otel sampler parent based Uses sampler implementation which by default will take a sample if parent Activity is sampled default false otel sampler ratio Specifies sample rate for any traces created default 0 01 otel sampler Specifies the sampler to be used when sampling traces The available samplers are AlwaysOff 
AlwaysOn TraceIdRatioBased remote default AlwaysOff main snippet Adds custom configuration to the main section of the nginx configuration http snippet Adds custom configuration to the http section of the nginx configuration server snippet Adds custom configuration to all the servers in the nginx configuration stream snippet Adds custom configuration to the stream section of the nginx configuration location snippet Adds custom configuration to all the locations in the nginx configuration You can not use this to add new locations that proxy to the Kubernetes pods as the snippet does not have access to the Go template functions If you want to add custom locations you will have to provide your own nginx tmpl https kubernetes github io ingress nginx user guide nginx configuration custom template custom http errors Enables which HTTP codes should be passed for processing with the error page directive https nginx org en docs http ngx http core module html error page Setting at least one code also enables proxy intercept errors https nginx org en docs http ngx http proxy module html proxy intercept errors which are required to process error page Example usage custom http errors 404 415 proxy body size Sets the maximum allowed size of the client request body See NGINX client max body size https nginx org en docs http ngx http core module html client max body size proxy connect timeout Sets the timeout for establishing a connection with a proxied server https nginx org en docs http ngx http proxy module html proxy connect timeout It should be noted that this timeout cannot usually exceed 75 seconds It will also set the grpc connect timeout https nginx org en docs http ngx http grpc module html grpc connect timeout for gRPC connections proxy read timeout Sets the timeout in seconds for reading a response from the proxied server https nginx org en docs http ngx http proxy module html proxy read timeout The timeout is set only between two successive read operations not for the transmission of the whole response It will also set the grpc read timeout https nginx org en docs http ngx http grpc module html grpc read timeout for gRPC connections proxy send timeout Sets the timeout in seconds for transmitting a request to the proxied server https nginx org en docs http ngx http proxy module html proxy send timeout The timeout is set only between two successive write operations not for the transmission of the whole request It will also set the grpc send timeout https nginx org en docs http ngx http grpc module html grpc send timeout for gRPC connections proxy buffers number Sets the number of the buffer used for reading the first part of the response https nginx org en docs http ngx http proxy module html proxy buffers received from the proxied server This part usually contains a small response header proxy buffer size Sets the size of the buffer used for reading the first part of the response https nginx org en docs http ngx http proxy module html proxy buffer size received from the proxied server This part usually contains a small response header proxy busy buffers size Limits the total size of buffers that can be busy https nginx org en docs http ngx http proxy module html proxy busy buffers size sending a response to the client while the response is not yet fully read proxy cookie path Sets a text that should be changed in the path attribute https nginx org en docs http ngx http proxy module html proxy cookie path of the Set Cookie header fields of a proxied server response proxy cookie domain Sets a 
text that should be changed in the domain attribute https nginx org en docs http ngx http proxy module html proxy cookie domain of the Set Cookie header fields of a proxied server response proxy next upstream Specifies in which cases https nginx org en docs http ngx http proxy module html proxy next upstream a request should be passed to the next server proxy next upstream timeout Limits the time https nginx org en docs http ngx http proxy module html proxy next upstream timeout in seconds during which a request can be passed to the next server proxy next upstream tries Limit the number of possible tries https nginx org en docs http ngx http proxy module html proxy next upstream tries a request should be passed to the next server proxy redirect from Sets the original text that should be changed in the Location and Refresh header fields of a proxied server response default off References https nginx org en docs http ngx http proxy module html proxy redirect https nginx org en docs http ngx http proxy module html proxy redirect proxy request buffering Enables or disables buffering of a client request body https nginx org en docs http ngx http proxy module html proxy request buffering ssl redirect Sets the global value of redirects 301 to HTTPS if the server has a TLS certificate defined in an Ingress rule default true force ssl redirect Sets the global value of redirects 308 to HTTPS if the server has a default TLS certificate defined in extra args default false denylist source range Sets the default denylisted IPs for each server block This can be overwritten by an annotation on an Ingress rule See ngx http access module https nginx org en docs http ngx http access module html whitelist source range Sets the default whitelisted IPs for each server block This can be overwritten by an annotation on an Ingress rule See ngx http access module https nginx org en docs http ngx http access module html skip access log urls Sets a list of URLs that should not appear in the NGINX access log This is useful with urls like health or health check that make complex reading the logs default is empty limit rate Limits the rate of response transmission to a client The rate is specified in bytes per second The zero value disables rate limiting The limit is set per a request and so if a client simultaneously opens two connections the overall rate will be twice as much as the specified limit References https nginx org en docs http ngx http core module html limit rate https nginx org en docs http ngx http core module html limit rate limit rate after Sets the initial amount after which the further transmission of a response to a client will be rate limited References https nginx org en docs http ngx http core module html limit rate after https nginx org en docs http ngx http core module html limit rate after lua shared dicts Customize default Lua shared dictionaries or define more You can use the following syntax to do so lua shared dicts my dict name my dict size my dict name my dict size For example following will set default certificate data dictionary to 100M and will introduce a new dictionary called my custom plugin lua shared dicts certificate data 100 my custom plugin 5 You can optionally set a size unit to allow for kilobyte granularity Allowed units are m or k case insensitive and it defaults to MB if no unit is provided Here is a similar example but the my custom plugin dict is only 512KB lua shared dicts certificate data 100 my custom plugin 512k http redirect code Sets the HTTP status code to be used 
in redirects Supported codes are 301 https developer mozilla org docs Web HTTP Status 301 302 https developer mozilla org docs Web HTTP Status 302 307 https developer mozilla org docs Web HTTP Status 307 and 308 https developer mozilla org docs Web HTTP Status 308 default 308 Why the default code is 308 RFC 7238 https tools ietf org html rfc7238 was created to define the 308 Permanent Redirect status code that is similar to 301 Moved Permanently but it keeps the payload in the redirect This is important if we send a redirect in methods like POST proxy buffering Enables or disables buffering of responses from the proxied server https nginx org en docs http ngx http proxy module html proxy buffering limit req status code Sets the status code to return in response to rejected requests https nginx org en docs http ngx http limit req module html limit req status default 503 limit conn status code Sets the status code to return in response to rejected connections https nginx org en docs http ngx http limit conn module html limit conn status default 503 enable syslog Enable syslog https nginx org en docs syslog html feature for access log and error log default false syslog host Sets the address of syslog server The address can be specified as a domain name or IP address syslog port Sets the port of syslog server default 514 no tls redirect locations A comma separated list of locations on which http requests will never get redirected to their https counterpart default well known acme challenge global allowed response headers A comma separated list of allowed response headers inside the custom headers annotations https github com kubernetes ingress nginx blob main docs user guide nginx configuration annotations md custom headers global auth url A url to an existing service that provides authentication for all the locations Similar to the Ingress rule annotation nginx ingress kubernetes io auth url Locations that should not get authenticated can be listed using no auth locations See no auth locations no auth locations In addition each service can be excluded from authentication via annotation enable global auth set to false default References https github com kubernetes ingress nginx blob main docs user guide nginx configuration annotations md external authentication https github com kubernetes ingress nginx blob main docs user guide nginx configuration annotations md external authentication global auth method A HTTP method to use for an existing service that provides authentication for all the locations Similar to the Ingress rule annotation nginx ingress kubernetes io auth method default global auth signin Sets the location of the error page for an existing service that provides authentication for all the locations Similar to the Ingress rule annotation nginx ingress kubernetes io auth signin default global auth signin redirect param Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication Similar to the Ingress rule annotation nginx ingress kubernetes io auth signin redirect param default rd global auth response headers Sets the headers to pass to backend once authentication request completes Applied to all the locations Similar to the Ingress rule annotation nginx ingress kubernetes io auth response headers default global auth request redirect Sets the X Auth Request Redirect header value Applied to all the locations Similar to the Ingress rule annotation nginx ingress kubernetes io auth request redirect default global auth 
snippet Sets a custom snippet to use with external authentication Applied to all the locations Similar to the Ingress rule annotation nginx ingress kubernetes io auth snippet default global auth cache key Enables caching for global auth requests Specify a lookup key for auth responses e g remote user http authorization global auth cache duration Set a caching time for auth responses based on their response codes e g 200 202 30m See proxy cache valid https nginx org en docs http ngx http proxy module html proxy cache valid for details You may specify multiple comma separated values 200 202 10m 401 5m defaults to 200 202 401 5m global auth always set cookie Always set a cookie returned by auth request By default the cookie will be set only if an upstream reports with the code 200 201 204 206 301 302 303 304 307 or 308 default false no auth locations A comma separated list of locations that should not get authenticated default well known acme challenge block cidrs A comma separated list of IP addresses or subnets request from which have to be blocked globally References https nginx org en docs http ngx http access module html deny https nginx org en docs http ngx http access module html deny block user agents A comma separated list of User Agent request from which have to be blocked globally It s possible to use here full strings and regular expressions More details about valid patterns can be found at map Nginx directive documentation References https nginx org en docs http ngx http map module html map https nginx org en docs http ngx http map module html map block referers A comma separated list of Referers request from which have to be blocked globally It s possible to use here full strings and regular expressions More details about valid patterns can be found at map Nginx directive documentation References https nginx org en docs http ngx http map module html map https nginx org en docs http ngx http map module html map proxy ssl location only Set if proxy ssl parameters should be applied only on locations and not on servers default is disabled default type Sets the default MIME type of a response default text html References https nginx org en docs http ngx http core module html default type https nginx org en docs http ngx http core module html default type service upstream Set if the service s Cluster IP and port should be used instead of a list of all endpoints This can be overwritten by an annotation on an Ingress rule default false ssl reject handshake Set to reject SSL handshake to an unknown virtualhost This parameter helps to mitigate the fingerprinting using default certificate of ingress default false References https nginx org en docs http ngx http ssl module html ssl reject handshake https nginx org en docs http ngx http ssl module html ssl reject handshake debug connections Enables debugging log for selected client connections default References http nginx org en docs ngx core module html debug connection http nginx org en docs ngx core module html debug connection strict validate path type Ingress objects contains a field called pathType that defines the proxy behavior It can be Exact Prefix and ImplementationSpecific When pathType is configured as Exact or Prefix there should be a more strict validation allowing only paths starting with and containing only alphanumeric characters and and additional When this option is enabled the validation will happen on the Admission Webhook making any Ingress not using pathType ImplementationSpecific and containing invalid characters to 
be denied This means that Ingress objects that rely on paths containing regex characters should use ImplementationSpecific pathType The cluster admin should establish validation rules using mechanisms like Open Policy Agent https www openpolicyagent org to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used default true grpc buffer size kb Sets the configuration for the GRPC Buffer Size parameter If not set it will use the default from NGINX References https nginx org en docs http ngx http grpc module html grpc buffer size https nginx org en docs http ngx http grpc module html grpc buffer size relative redirects Use relative redirects instead of absolute redirects Absolute redirects are the default in nginx RFC7231 allows relative redirects since 2014 Similar to the Ingress rule annotation nginx ingress kubernetes io relative redirects default false References https nginx org en docs http ngx http core module html absolute redirect https nginx org en docs http ngx http core module html absolute redirect https datatracker ietf org doc html rfc7231 section 7 1 2 https datatracker ietf org doc html rfc7231 section 7 1 2 |
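Tying a few of the options above together: when the controller runs behind an L7 load balancer that sets `X-Forwarded-*` headers, a configuration along these lines trusts those headers (the CIDR is an assumed placeholder for your load balancer's address range):

```yaml
data:
  # Trust X-Forwarded-* headers set by the load balancer in front of nginx.
  use-forwarded-headers: "true"
  # CIDR of the external load balancer whose headers are trusted (assumed value).
  proxy-real-ip-cidr: "10.0.0.0/8"
  # Header carrying the original client IP.
  forwarded-for-header: "X-Forwarded-For"
```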
# Annotations Scope and Risk
|Group |Annotation | Risk | Scope |
|--------|------------------|------|-------|
| Aliases | server-alias | High | ingress |
| Allowlist | allowlist-source-range | Medium | location |
| BackendProtocol | backend-protocol | Low | location |
| BasicDigestAuth | auth-realm | Medium | location |
| BasicDigestAuth | auth-secret | Medium | location |
| BasicDigestAuth | auth-secret-type | Low | location |
| BasicDigestAuth | auth-type | Low | location |
| Canary | canary | Low | ingress |
| Canary | canary-by-cookie | Medium | ingress |
| Canary | canary-by-header | Medium | ingress |
| Canary | canary-by-header-pattern | Medium | ingress |
| Canary | canary-by-header-value | Medium | ingress |
| Canary | canary-weight | Low | ingress |
| Canary | canary-weight-total | Low | ingress |
| CertificateAuth | auth-tls-error-page | High | location |
| CertificateAuth | auth-tls-match-cn | High | location |
| CertificateAuth | auth-tls-pass-certificate-to-upstream | Low | location |
| CertificateAuth | auth-tls-secret | Medium | location |
| CertificateAuth | auth-tls-verify-client | Medium | location |
| CertificateAuth | auth-tls-verify-depth | Low | location |
| ClientBodyBufferSize | client-body-buffer-size | Low | location |
| ConfigurationSnippet | configuration-snippet | Critical | location |
| Connection | connection-proxy-header | Low | location |
| CorsConfig | cors-allow-credentials | Low | ingress |
| CorsConfig | cors-allow-headers | Medium | ingress |
| CorsConfig | cors-allow-methods | Medium | ingress |
| CorsConfig | cors-allow-origin | Medium | ingress |
| CorsConfig | cors-expose-headers | Medium | ingress |
| CorsConfig | cors-max-age | Low | ingress |
| CorsConfig | enable-cors | Low | ingress |
| CustomHTTPErrors | custom-http-errors | Low | location |
| CustomHeaders | custom-headers | Medium | location |
| DefaultBackend | default-backend | Low | location |
| Denylist | denylist-source-range | Medium | location |
| DisableProxyInterceptErrors | disable-proxy-intercept-errors | Low | location |
| EnableGlobalAuth | enable-global-auth | Low | location |
| ExternalAuth | auth-always-set-cookie | Low | location |
| ExternalAuth | auth-cache-duration | Medium | location |
| ExternalAuth | auth-cache-key | Medium | location |
| ExternalAuth | auth-keepalive | Low | location |
| ExternalAuth | auth-keepalive-requests | Low | location |
| ExternalAuth | auth-keepalive-share-vars | Low | location |
| ExternalAuth | auth-keepalive-timeout | Low | location |
| ExternalAuth | auth-method | Low | location |
| ExternalAuth | auth-proxy-set-headers | Medium | location |
| ExternalAuth | auth-request-redirect | Medium | location |
| ExternalAuth | auth-response-headers | Medium | location |
| ExternalAuth | auth-signin | High | location |
| ExternalAuth | auth-signin-redirect-param | Medium | location |
| ExternalAuth | auth-snippet | Critical | location |
| ExternalAuth | auth-url | High | location |
| FastCGI | fastcgi-index | Medium | location |
| FastCGI | fastcgi-params-configmap | Medium | location |
| HTTP2PushPreload | http2-push-preload | Low | location |
| LoadBalancing | load-balance | Low | location |
| Logs | enable-access-log | Low | location |
| Logs | enable-rewrite-log | Low | location |
| Mirror | mirror-host | High | ingress |
| Mirror | mirror-request-body | Low | ingress |
| Mirror | mirror-target | High | ingress |
| ModSecurity | enable-modsecurity | Low | ingress |
| ModSecurity | enable-owasp-core-rules | Low | ingress |
| ModSecurity | modsecurity-snippet | Critical | ingress |
| ModSecurity | modsecurity-transaction-id | High | ingress |
| Opentelemetry | enable-opentelemetry | Low | location |
| Opentelemetry | opentelemetry-operation-name | Medium | location |
| Opentelemetry | opentelemetry-trust-incoming-span | Low | location |
| Proxy | proxy-body-size | Medium | location |
| Proxy | proxy-buffer-size | Low | location |
| Proxy | proxy-buffering | Low | location |
| Proxy | proxy-buffers-number | Low | location |
| Proxy | proxy-busy-buffers-size | Low | location |
| Proxy | proxy-connect-timeout | Low | location |
| Proxy | proxy-cookie-domain | Medium | location |
| Proxy | proxy-cookie-path | Medium | location |
| Proxy | proxy-http-version | Low | location |
| Proxy | proxy-max-temp-file-size | Low | location |
| Proxy | proxy-next-upstream | Medium | location |
| Proxy | proxy-next-upstream-timeout | Low | location |
| Proxy | proxy-next-upstream-tries | Low | location |
| Proxy | proxy-read-timeout | Low | location |
| Proxy | proxy-redirect-from | Medium | location |
| Proxy | proxy-redirect-to | Medium | location |
| Proxy | proxy-request-buffering | Low | location |
| Proxy | proxy-send-timeout | Low | location |
| ProxySSL | proxy-ssl-ciphers | Medium | ingress |
| ProxySSL | proxy-ssl-name | High | ingress |
| ProxySSL | proxy-ssl-protocols | Low | ingress |
| ProxySSL | proxy-ssl-secret | Medium | ingress |
| ProxySSL | proxy-ssl-server-name | Low | ingress |
| ProxySSL | proxy-ssl-verify | Low | ingress |
| ProxySSL | proxy-ssl-verify-depth | Low | ingress |
| RateLimit | limit-allowlist | Low | location |
| RateLimit | limit-burst-multiplier | Low | location |
| RateLimit | limit-connections | Low | location |
| RateLimit | limit-rate | Low | location |
| RateLimit | limit-rate-after | Low | location |
| RateLimit | limit-rpm | Low | location |
| RateLimit | limit-rps | Low | location |
| Redirect | from-to-www-redirect | Low | location |
| Redirect | permanent-redirect | Medium | location |
| Redirect | permanent-redirect-code | Low | location |
| Redirect | relative-redirects | Low | location |
| Redirect | temporal-redirect | Medium | location |
| Redirect | temporal-redirect-code | Low | location |
| Rewrite | app-root | Medium | location |
| Rewrite | force-ssl-redirect | Medium | location |
| Rewrite | preserve-trailing-slash | Medium | location |
| Rewrite | rewrite-target | Medium | ingress |
| Rewrite | ssl-redirect | Low | location |
| Rewrite | use-regex | Low | location |
| SSLCipher | ssl-ciphers | Low | ingress |
| SSLCipher | ssl-prefer-server-ciphers | Low | ingress |
| SSLPassthrough | ssl-passthrough | Low | ingress |
| Satisfy | satisfy | Low | location |
| ServerSnippet | server-snippet | Critical | ingress |
| ServiceUpstream | service-upstream | Low | ingress |
| SessionAffinity | affinity | Low | ingress |
| SessionAffinity | affinity-canary-behavior | Low | ingress |
| SessionAffinity | affinity-mode | Medium | ingress |
| SessionAffinity | session-cookie-change-on-failure | Low | ingress |
| SessionAffinity | session-cookie-conditional-samesite-none | Low | ingress |
| SessionAffinity | session-cookie-domain | Medium | ingress |
| SessionAffinity | session-cookie-expires | Medium | ingress |
| SessionAffinity | session-cookie-max-age | Medium | ingress |
| SessionAffinity | session-cookie-name | Medium | ingress |
| SessionAffinity | session-cookie-path | Medium | ingress |
| SessionAffinity | session-cookie-samesite | Low | ingress |
| SessionAffinity | session-cookie-secure | Low | ingress |
| StreamSnippet | stream-snippet | Critical | ingress |
| UpstreamHashBy | upstream-hash-by | High | location |
| UpstreamHashBy | upstream-hash-by-subset | Low | location |
| UpstreamHashBy | upstream-hash-by-subset-size | Low | location |
| UpstreamVhost | upstream-vhost | Low | location |
| UsePortInRedirects | use-port-in-redirects | Low | location |
| XForwardedPrefix | x-forwarded-prefix | Medium | location |
# Annotations
You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.
!!! tip
Annotation keys and values can only be strings.
Other types, such as boolean or numeric values must be quoted,
i.e. `"true"`, `"false"`, `"100"`.
!!! note
The annotation prefix can be changed using the
[`--annotations-prefix` command line argument](../cli-arguments.md),
but the default is `nginx.ingress.kubernetes.io`, as described in the
table below.
|Name | type |
|---------------------------|------|
|[nginx.ingress.kubernetes.io/app-root](#rewrite)|string|
|[nginx.ingress.kubernetes.io/affinity](#session-affinity)|cookie|
|[nginx.ingress.kubernetes.io/affinity-mode](#session-affinity)|"balanced" or "persistent"|
|[nginx.ingress.kubernetes.io/affinity-canary-behavior](#session-affinity)|"sticky" or "legacy"|
|[nginx.ingress.kubernetes.io/auth-realm](#authentication)|string|
|[nginx.ingress.kubernetes.io/auth-secret](#authentication)|string|
|[nginx.ingress.kubernetes.io/auth-secret-type](#authentication)|string|
|[nginx.ingress.kubernetes.io/auth-type](#authentication)|"basic" or "digest"|
|[nginx.ingress.kubernetes.io/auth-tls-secret](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-tls-verify-depth](#client-certificate-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-tls-verify-client](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-tls-error-page](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream](#client-certificate-authentication)|"true" or "false"|
|[nginx.ingress.kubernetes.io/auth-tls-match-cn](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-url](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-cache-key](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-cache-duration](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-keepalive](#external-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-keepalive-share-vars](#external-authentication)|"true" or "false"|
|[nginx.ingress.kubernetes.io/auth-keepalive-requests](#external-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-keepalive-timeout](#external-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-proxy-set-headers](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-snippet](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/enable-global-auth](#external-authentication)|"true" or "false"|
|[nginx.ingress.kubernetes.io/backend-protocol](#backend-protocol)|string|
|[nginx.ingress.kubernetes.io/canary](#canary)|"true" or "false"|
|[nginx.ingress.kubernetes.io/canary-by-header](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-by-header-value](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-by-header-pattern](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-by-cookie](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-weight](#canary)|number|
|[nginx.ingress.kubernetes.io/canary-weight-total](#canary)|number|
|[nginx.ingress.kubernetes.io/client-body-buffer-size](#client-body-buffer-size)|string|
|[nginx.ingress.kubernetes.io/configuration-snippet](#configuration-snippet)|string|
|[nginx.ingress.kubernetes.io/custom-http-errors](#custom-http-errors)|[]int|
|[nginx.ingress.kubernetes.io/custom-headers](#custom-headers)|string|
|[nginx.ingress.kubernetes.io/default-backend](#default-backend)|string|
|[nginx.ingress.kubernetes.io/enable-cors](#enable-cors)|"true" or "false"|
|[nginx.ingress.kubernetes.io/cors-allow-origin](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-allow-methods](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-allow-headers](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-expose-headers](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-allow-credentials](#enable-cors)|"true" or "false"|
|[nginx.ingress.kubernetes.io/cors-max-age](#enable-cors)|number|
|[nginx.ingress.kubernetes.io/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|"true" or "false"|
|[nginx.ingress.kubernetes.io/from-to-www-redirect](#redirect-fromto-www)|"true" or "false"|
|[nginx.ingress.kubernetes.io/http2-push-preload](#http2-push-preload)|"true" or "false"|
|[nginx.ingress.kubernetes.io/limit-connections](#rate-limiting)|number|
|[nginx.ingress.kubernetes.io/limit-rps](#rate-limiting)|number|
|[nginx.ingress.kubernetes.io/permanent-redirect](#permanent-redirect)|string|
|[nginx.ingress.kubernetes.io/permanent-redirect-code](#permanent-redirect-code)|number|
|[nginx.ingress.kubernetes.io/temporal-redirect](#temporal-redirect)|string|
|[nginx.ingress.kubernetes.io/temporal-redirect-code](#temporal-redirect-code)|number|
|[nginx.ingress.kubernetes.io/preserve-trailing-slash](#server-side-https-enforcement-through-redirect)|"true" or "false"|
|[nginx.ingress.kubernetes.io/proxy-body-size](#custom-max-body-size)|string|
|[nginx.ingress.kubernetes.io/proxy-cookie-domain](#proxy-cookie-domain)|string|
|[nginx.ingress.kubernetes.io/proxy-cookie-path](#proxy-cookie-path)|string|
|[nginx.ingress.kubernetes.io/proxy-connect-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-send-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-read-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-next-upstream](#custom-timeouts)|string|
|[nginx.ingress.kubernetes.io/proxy-next-upstream-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-next-upstream-tries](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-request-buffering](#custom-timeouts)|string|
|[nginx.ingress.kubernetes.io/proxy-redirect-from](#proxy-redirect)|string|
|[nginx.ingress.kubernetes.io/proxy-redirect-to](#proxy-redirect)|string|
|[nginx.ingress.kubernetes.io/proxy-http-version](#proxy-http-version)|"1.0" or "1.1"|
|[nginx.ingress.kubernetes.io/proxy-ssl-secret](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-ciphers](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-name](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-protocols](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-verify](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-verify-depth](#backend-certificate-authentication)|number|
|[nginx.ingress.kubernetes.io/proxy-ssl-server-name](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/enable-rewrite-log](#enable-rewrite-log)|"true" or "false"|
|[nginx.ingress.kubernetes.io/rewrite-target](#rewrite)|URI|
|[nginx.ingress.kubernetes.io/satisfy](#satisfy)|string|
|[nginx.ingress.kubernetes.io/server-alias](#server-alias)|string|
|[nginx.ingress.kubernetes.io/server-snippet](#server-snippet)|string|
|[nginx.ingress.kubernetes.io/service-upstream](#service-upstream)|"true" or "false"|
|[nginx.ingress.kubernetes.io/session-cookie-change-on-failure](#cookie-affinity)|"true" or "false"|
|[nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none](#cookie-affinity)|"true" or "false"|
|[nginx.ingress.kubernetes.io/session-cookie-domain](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-expires](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-max-age](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-name](#cookie-affinity)|string (defaults to `INGRESSCOOKIE`)|
|[nginx.ingress.kubernetes.io/session-cookie-path](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-samesite](#cookie-affinity)|string ("None", "Lax" or "Strict")|
|[nginx.ingress.kubernetes.io/session-cookie-secure](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|"true" or "false"|
|[nginx.ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|"true" or "false"|
|[nginx.ingress.kubernetes.io/stream-snippet](#stream-snippet)|string|
|[nginx.ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string|
|[nginx.ingress.kubernetes.io/x-forwarded-prefix](#x-forwarded-prefix-header)|string|
|[nginx.ingress.kubernetes.io/load-balance](#custom-nginx-load-balancing)|string|
|[nginx.ingress.kubernetes.io/upstream-vhost](#custom-nginx-upstream-vhost)|string|
|[nginx.ingress.kubernetes.io/denylist-source-range](#denylist-source-range)|CIDR|
|[nginx.ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR|
|[nginx.ingress.kubernetes.io/proxy-buffering](#proxy-buffering)|string|
|[nginx.ingress.kubernetes.io/proxy-buffers-number](#proxy-buffers-number)|number|
|[nginx.ingress.kubernetes.io/proxy-buffer-size](#proxy-buffer-size)|string|
|[nginx.ingress.kubernetes.io/proxy-busy-buffers-size](#proxy-busy-buffers-size)|string|
|[nginx.ingress.kubernetes.io/proxy-max-temp-file-size](#proxy-max-temp-file-size)|string|
|[nginx.ingress.kubernetes.io/ssl-ciphers](#ssl-ciphers)|string|
|[nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers](#ssl-ciphers)|"true" or "false"|
|[nginx.ingress.kubernetes.io/connection-proxy-header](#connection-proxy-header)|string|
|[nginx.ingress.kubernetes.io/enable-access-log](#enable-access-log)|"true" or "false"|
|[nginx.ingress.kubernetes.io/enable-opentelemetry](#enable-opentelemetry)|"true" or "false"|
|[nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-spans)|"true" or "false"|
|[nginx.ingress.kubernetes.io/use-regex](#use-regex)|bool|
|[nginx.ingress.kubernetes.io/enable-modsecurity](#modsecurity)|bool|
|[nginx.ingress.kubernetes.io/enable-owasp-core-rules](#modsecurity)|bool|
|[nginx.ingress.kubernetes.io/modsecurity-transaction-id](#modsecurity)|string|
|[nginx.ingress.kubernetes.io/modsecurity-snippet](#modsecurity)|string|
|[nginx.ingress.kubernetes.io/mirror-request-body](#mirror)|string|
|[nginx.ingress.kubernetes.io/mirror-target](#mirror)|string|
|[nginx.ingress.kubernetes.io/mirror-host](#mirror)|string|
### Canary
In some cases, you may want to "canary" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after `nginx.ingress.kubernetes.io/canary: "true"` is set:
* `nginx.ingress.kubernetes.io/canary-by-header`: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to `always`, it will be routed to the canary. When the header is set to `never`, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.
* `nginx.ingress.kubernetes.io/canary-by-header-value`: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with `nginx.ingress.kubernetes.io/canary-by-header`. The annotation is an extension of the `nginx.ingress.kubernetes.io/canary-by-header` to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the `nginx.ingress.kubernetes.io/canary-by-header` annotation is not defined.
* `nginx.ingress.kubernetes.io/canary-by-header-pattern`: This works the same way as `canary-by-header-value` except it does PCRE Regex matching. Note that when `canary-by-header-value` is set this annotation will be ignored. When the given Regex causes error during request processing, the request will be considered as not matching.
* `nginx.ingress.kubernetes.io/canary-by-cookie`: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to `always`, it will be routed to the canary. When the cookie is set to `never`, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.
* `nginx.ingress.kubernetes.io/canary-weight`: The integer based (0 - <weight-total>) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of `<weight-total>` means all requests will be sent to the alternative service specified in the Ingress. `<weight-total>` defaults to 100, and can be increased via `nginx.ingress.kubernetes.io/canary-weight-total`.
* `nginx.ingress.kubernetes.io/canary-weight-total`: The total weight of traffic. If unspecified, it defaults to 100.
Canary rules are evaluated in order of precedence. Precedence is as follows:
`canary-by-header -> canary-by-cookie -> canary-weight`
**Note** that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except `nginx.ingress.kubernetes.io/load-balance`, `nginx.ingress.kubernetes.io/upstream-hash-by`, and [annotations related to session affinity](#session-affinity). If you want to restore the original behavior of canaries when session affinity was ignored, set `nginx.ingress.kubernetes.io/affinity-canary-behavior` annotation with value `legacy` on the canary ingress definition.
**Known Limitations**
Currently a maximum of one canary ingress can be applied per Ingress rule.
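For illustration only, a minimal canary Ingress sketch (host and service names here are placeholders, not taken from this document) that routes 10% of traffic, or any request carrying `X-Canary: always`, to a canary service might look like:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: echo.example.com   # placeholder host, must match the main Ingress
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-canary   # placeholder canary Service
                port:
                  number: 80
```
The main (non-canary) Ingress for the same host and path is assumed to exist separately; the canary Ingress only adds the alternative backend.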
### Rewrite
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.
Set the annotation `nginx.ingress.kubernetes.io/rewrite-target` to the path expected by the service.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation `nginx.ingress.kubernetes.io/app-root` to redirect requests for `/`.
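As a hedged sketch (path, host and service names below are placeholders), a capture-group rewrite that strips a `/something` prefix before proxying could be expressed as:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: rewrite.example.com
      http:
        paths:
          - path: /something(/|$)(.*)   # $2 captures everything after /something
            pathType: ImplementationSpecific
            backend:
              service:
                name: http-svc          # placeholder Service
                port:
                  number: 80
```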
!!! example
Please check the [rewrite](../../examples/rewrite/README.md) example.
### Session Affinity
The annotation `nginx.ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.
The only affinity type available for NGINX is `cookie`.
The annotation `nginx.ingress.kubernetes.io/affinity-mode` defines the stickiness of a session. Setting this to `balanced` (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to `persistent` will not rebalance sessions to new servers, therefore providing maximum stickiness.
The annotation `nginx.ingress.kubernetes.io/affinity-canary-behavior` defines the behavior of canaries when session affinity is enabled. Setting this to `sticky` (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to `legacy` will restore original canary behavior, when session affinity was ignored.
!!! attention
If more than one Ingress is defined for a host and at least one Ingress uses `nginx.ingress.kubernetes.io/affinity: cookie`, then only paths on the Ingress using `nginx.ingress.kubernetes.io/affinity` will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.
!!! example
Please check the [affinity](../../examples/affinity/cookie/README.md) example.
#### Cookie affinity
If you use the ``cookie`` affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation `nginx.ingress.kubernetes.io/session-cookie-name`. The default is to create a cookie named 'INGRESSCOOKIE'.
The NGINX annotation `nginx.ingress.kubernetes.io/session-cookie-path` defines the path that will be set on the cookie. This is optional unless the annotation `nginx.ingress.kubernetes.io/use-regex` is set to true; Session cookie paths do not support regex.
Use `nginx.ingress.kubernetes.io/session-cookie-domain` to set the `Domain` attribute of the sticky cookie.
Use `nginx.ingress.kubernetes.io/session-cookie-samesite` to apply a `SameSite` attribute to the sticky cookie. Browser accepted values are `None`, `Lax`, and `Strict`. Some browsers reject cookies with `SameSite=None`, including those created before the `SameSite=None` specification (e.g. Chrome 5X). Other browsers mistakenly treat `SameSite=None` cookies as `SameSite=Strict` (e.g. Safari running on OSX 14). To omit `SameSite=None` from browsers with these incompatibilities, add the annotation `nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: "true"`.
Use `nginx.ingress.kubernetes.io/session-cookie-expires` to control when the cookie expires; its value is a number of seconds until the cookie expires.
Use `nginx.ingress.kubernetes.io/session-cookie-path` to control the cookie path when use-regex is set to true.
Use `nginx.ingress.kubernetes.io/session-cookie-change-on-failure` to control the cookie change after request failure.
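Putting the cookie-affinity annotations together, a minimal sketch (the cookie name and lifetimes below are arbitrary examples, not recommendations) might be:
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"       # defaults to INGRESSCOOKIE if omitted
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"   # seconds
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"
```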
### Authentication
It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.
The annotations are:
```
nginx.ingress.kubernetes.io/auth-type: [basic|digest]
```
Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).
```
nginx.ingress.kubernetes.io/auth-secret: secretName
```
The name of the Secret that contains the usernames and passwords which are granted access to the `path`s defined in the Ingress rules.
This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
```
nginx.ingress.kubernetes.io/auth-secret-type: [auth-file|auth-map]
```
The `auth-secret` can have two forms:
- `auth-file` - default, an htpasswd file in the key `auth` within the secret
- `auth-map` - the keys of the secret are the usernames, and the values are the hashed passwords
```
nginx.ingress.kubernetes.io/auth-realm: "realm string"
```
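Combining the annotations above, a hedged example (the Secret `basic-auth` is assumed to already exist in the Ingress namespace with an htpasswd file under the key `auth`) could be:
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-secret-type: auth-file
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
```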
!!! example
Please check the [auth](../../examples/auth/basic/README.md) example.
### Custom NGINX upstream hashing
NGINX supports load balancing by client-server mapping based on [consistent hashing](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](https://www.last.fm/user/RJ/journal/2007/04/10/rz_libketama_-_a_consistent_hashing_algo_for_memcache_clients) consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. This provides a balance between stickiness and load distribution.
To enable consistent hashing for a backend:
`nginx.ingress.kubernetes.io/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: `nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"` or `nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri$host"` or `nginx.ingress.kubernetes.io/upstream-hash-by: "${request_uri}-text-value"` to consistently hash upstream requests by the current request URI.
"subset" hashing can be enabled setting `nginx.ingress.kubernetes.io/upstream-hash-by-subset`: "true". This maps requests to subset of nodes instead of a single one. `nginx.ingress.kubernetes.io/upstream-hash-by-subset-size` determines the size of each subset (default 3).
Please check the [chashsubset](../../examples/chashsubset/deployment.yaml) example.
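A minimal sketch combining plain and subset hashing (the hash key and subset size below are just examples):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset: "true"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset-size: "3"
```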
### Custom NGINX load balancing
This is similar to [`load-balance` in ConfigMap](./configmap.md#load-balance), but configures load balancing algorithm per ingress.
>Note that `nginx.ingress.kubernetes.io/upstream-hash-by` takes preference over this. If neither this nor `nginx.ingress.kubernetes.io/upstream-hash-by` is set, the globally configured load balancing algorithm is used.
### Custom NGINX upstream vhost
This configuration setting allows you to control the value for host in the following statement: `proxy_set_header Host $host`, which forms part of the location block. This is useful if you need to call the upstream server by something other than `$host`.
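For example, assuming a backend that only answers to an internal hostname (the value below is a placeholder):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: "internal.myservice.example.com"
```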
### Client Certificate Authentication
It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.
Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.
To enable, add the annotation `nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName`. This secret must have a file named `ca.crt` containing the full Certificate Authority chain that is enabled to authenticate against this Ingress.
You can further customize client certificate authentication and behavior with these annotations:
* `nginx.ingress.kubernetes.io/auth-tls-verify-depth`: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
* `nginx.ingress.kubernetes.io/auth-tls-verify-client`: Enables verification of client certificates. Possible values are:
* `on`: Request a client certificate that must be signed by a certificate that is included in the secret key `ca.crt` of the secret specified by `nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName`. Failed certificate verification will result in a status code 400 (Bad Request) (default)
* `off`: Don't request client certificates and don't do client certificate verification.
* `optional`: Do optional client certificate validation against the CAs from `auth-tls-secret`. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
* `optional_no_ca`: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from `auth-tls-secret`. Certificate verification result is sent to the upstream service.
* `nginx.ingress.kubernetes.io/auth-tls-error-page`: The URL/Page that the user should be redirected to in case of a Certificate Authentication Error
* `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream`: Indicates if the received certificates should be passed or not to the upstream server in the header `ssl-client-cert`. Possible values are "true" or "false" (default).
* `nginx.ingress.kubernetes.io/auth-tls-match-cn`: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with "CN=", example: `"CN=myvalidclient"`. If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: `"CN=(option1|option2|myvalidclient)"`. In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.
The following headers are sent to the upstream service according to the `auth-tls-*` annotations:
* `ssl-client-issuer-dn`: The issuer information of the client certificate. Example: "CN=My CA"
* `ssl-client-subject-dn`: The subject information of the client certificate. Example: "CN=My Client"
* `ssl-client-verify`: The result of the client verification. Possible values: "SUCCESS", "FAILED: <description, why the verification failed>"
* `ssl-client-cert`: The full client certificate in PEM format. Will only be sent when `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream` is set to "true". Example: `-----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A`
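A hedged sketch combining these annotations (the Secret `default/ca-secret` holding `ca.crt` is assumed to exist; the error page URL is a placeholder):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-error-page: "https://example.com/error-cert.html"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
```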
!!! example
Please check the [client-certs](../../examples/auth/client-certs/README.md) example.
!!! attention
TLS with Client Authentication is **not** possible in Cloudflare and might result in unexpected behavior.
Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: [https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/](https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/)
Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: [https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls](https://web.archive.org/web/20200907143649/https://support.cloudflare.com/hc/en-us/articles/204899617-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls#section5)
### Backend Certificate Authentication
It is possible to authenticate to a proxied HTTPS backend with certificate using additional annotations in Ingress Rule.
* `nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName`:
Specifies a Secret with the certificate `tls.crt`, key `tls.key` in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates `ca.crt` in PEM format used to verify the certificate of the proxied HTTPS server.
This annotation expects the Secret name in the form "namespace/secretName".
* `nginx.ingress.kubernetes.io/proxy-ssl-verify`:
Enables or disables verification of the proxied HTTPS server certificate. (default: off)
* `nginx.ingress.kubernetes.io/proxy-ssl-verify-depth`:
Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
* `nginx.ingress.kubernetes.io/proxy-ssl-ciphers`:
Specifies the enabled [ciphers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_ciphers) for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
* `nginx.ingress.kubernetes.io/proxy-ssl-name`:
Allows setting [proxy_ssl_name](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_name). This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
* `nginx.ingress.kubernetes.io/proxy-ssl-protocols`:
Enables the specified [protocols](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_protocols) for requests to a proxied HTTPS server.
* `nginx.ingress.kubernetes.io/proxy-ssl-server-name`:
Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.
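As an illustrative sketch (the Secret name and upstream server name below are placeholders):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/proxy-ssl-secret"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/proxy-ssl-name: "backend.internal.example.com"
    nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.2 TLSv1.3"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
```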
### Configuration snippet
Using this annotation you can add additional configuration to the NGINX location. For example:
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $req_id";
```
Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the [related issue on github](https://github.com/kubernetes/ingress-nginx/issues/7837) for more information.
### Custom HTTP Errors
Like the [`custom-http-errors`](./configmap.md#custom-http-errors) value in the ConfigMap, this annotation will set NGINX `proxy_intercept_errors`, but only for the NGINX location associated with this ingress. If a [default backend annotation](#default-backend) is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend).
Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress).
If `custom-http-errors` is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.
Example usage:
```
nginx.ingress.kubernetes.io/custom-http-errors: "404,415"
```
### Custom Headers
This annotation is of the form `nginx.ingress.kubernetes.io/custom-headers: <namespace>/<custom headers configmap>` to specify a namespace and configmap name that contains custom headers. This annotation uses the `more_set_headers` nginx directive.
Example annotation for following example configmap:
```yaml
nginx.ingress.kubernetes.io/custom-headers: default/custom-headers-configmap
```
Example configmap:
```yaml
apiVersion: v1
data:
Content-Type: application/json
kind: ConfigMap
metadata:
name: custom-headers-configmap
namespace: default
```
!!! attention
First define the allowed response headers in [global-allowed-response-headers](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#global-allowed-response-headers).
### Default Backend
This annotation is of the form `nginx.ingress.kubernetes.io/default-backend: <svc name>` to specify a custom default backend. This `<svc name>` is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has [multiple ports](https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services), the first one is the one which will receive the backend traffic.
This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the [custom-http-errors annotation](#custom-http-errors) are set.
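A sketch combining a custom default backend with intercepted errors (the Service name `error-pages` is a placeholder and must live in the same namespace as the Ingress):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/default-backend: error-pages
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
```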
### Enable CORS
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation
`nginx.ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server
location enabling this functionality.
CORS can be controlled with the following annotations:
* `nginx.ingress.kubernetes.io/cors-allow-methods`: Controls which methods are accepted.
This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).
- Default: `GET, PUT, POST, DELETE, PATCH, OPTIONS`
- Example: `nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"`
* `nginx.ingress.kubernetes.io/cors-allow-headers`: Controls which headers are accepted.
This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.
- Default: `DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization`
- Example: `nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"`
* `nginx.ingress.kubernetes.io/cors-expose-headers`: Controls which headers are exposed to response.
This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.
- Default: *empty*
- Example: `nginx.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader"`
* `nginx.ingress.kubernetes.io/cors-allow-origin`: Controls what's the accepted Origin for CORS.
This is a multi-valued field, separated by ','. It must follow this format: `protocol://origin-site.com` or `protocol://origin-site.com:port`
- Default: `*`
- Example: `nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com:4443, http://origin-site.com, myprotocol://example.org:1199"`
It also supports single level wildcard subdomains and follows this format: `protocol://*.foo.bar`, `protocol://*.bar.foo:8080` or `protocol://*.abc.bar.foo:9000`
- Example: `nginx.ingress.kubernetes.io/cors-allow-origin: "https://*.origin-site.com:4443, http://*.origin-site.com, myprotocol://example.org:1199"`
* `nginx.ingress.kubernetes.io/cors-allow-credentials`: Controls if credentials can be passed during CORS operations.
- Default: `true`
- Example: `nginx.ingress.kubernetes.io/cors-allow-credentials: "false"`
* `nginx.ingress.kubernetes.io/cors-max-age`: Controls how long preflight requests can be cached.
- Default: `1728000`
- Example: `nginx.ingress.kubernetes.io/cors-max-age: 600`
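A hedged sketch enabling CORS for a single origin (the origin and values below are examples only):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "false"
    nginx.ingress.kubernetes.io/cors-max-age: "600"
```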
!!! note
For more information please see [https://enable-cors.org](https://enable-cors.org/server_nginx.html)
### HTTP2 Push Preload
Enables automatic conversion of preload links specified in the “Link” response header fields into push requests.
!!! example
* `nginx.ingress.kubernetes.io/http2-push-preload: "true"`
### Server Alias
Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation `nginx.ingress.kubernetes.io/server-alias: "<alias 1>,<alias 2>"`.
This will create a server with the same configuration, but adding new values to the `server_name` directive.
!!! note
A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored.
If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take
place over the alias configuration.
For more information please see [the `server_name` documentation](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name).
### Server snippet
Using the annotation `nginx.ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
set $agentflag 0;
if ($http_user_agent ~* "(Mobile)" ){
set $agentflag 1;
}
if ( $agentflag = 1 ) {
return 301 https://m.example.com;
}
```
!!! attention
This annotation can be used only once per host.
### Client Body Buffer Size
Sets buffer size for reading client request body per location. In case the request body is larger than the buffer,
the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.
This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is
applied to each location provided in the ingress rule.
!!! note
The annotation value must be given in a format understood by Nginx.
!!! example
* `nginx.ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes
* `nginx.ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte
* `nginx.ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte
* `nginx.ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte
* `nginx.ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte
For more information please see [https://nginx.org](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size)
### External Authentication
To use an existing service that provides authentication the Ingress rule can be annotated with `nginx.ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent.
```yaml
nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service"
```
Additionally it is possible to set:
* `nginx.ingress.kubernetes.io/auth-keepalive`:
`<Connections>` to specify the maximum number of keepalive connections to `auth-url`. Only takes effect
when no variables are used in the host part of the URL. Defaults to `0` (keepalive disabled).
> Note: does not work with HTTP/2 listener because of a limitation in Lua [subrequests](https://github.com/openresty/lua-nginx-module#spdy-mode-not-fully-supported).
> [UseHTTP2](./configmap.md#use-http2) configuration should be disabled!
* `nginx.ingress.kubernetes.io/auth-keepalive-share-vars`:
Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to "true" X-Request-ID HTTP header will be the same for the backend and the auth request.
Defaults to "false".
* `nginx.ingress.kubernetes.io/auth-keepalive-requests`:
`<Requests>` to specify the maximum number of requests that can be served through one keepalive connection.
Defaults to `1000` and only applied if `auth-keepalive` is set to higher than `0`.
* `nginx.ingress.kubernetes.io/auth-keepalive-timeout`:
`<Timeout>` to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open.
Defaults to `60` and only applied if `auth-keepalive` is set to higher than `0`.
* `nginx.ingress.kubernetes.io/auth-method`:
`<Method>` to specify the HTTP method to use.
* `nginx.ingress.kubernetes.io/auth-signin`:
`<SignIn_URL>` to specify the location of the error page.
* `nginx.ingress.kubernetes.io/auth-signin-redirect-param`:
`<SignIn_URL>` to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
* `nginx.ingress.kubernetes.io/auth-response-headers`:
`<Response_Header_1, ..., Response_Header_n>` to specify headers to pass to backend once authentication request completes.
* `nginx.ingress.kubernetes.io/auth-proxy-set-headers`:
`<ConfigMap>` the name of a ConfigMap that specifies headers to pass to the authentication service
* `nginx.ingress.kubernetes.io/auth-request-redirect`:
`<Request_Redirect_URL>` to specify the X-Auth-Request-Redirect header value.
* `nginx.ingress.kubernetes.io/auth-cache-key`:
`<Cache_Key>` this enables caching for auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`. Each server and location has its own keyspace. Hence a cached response is only valid on a per-server and per-location basis.
* `nginx.ingress.kubernetes.io/auth-cache-duration`:
`<Cache_duration>` to specify a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. defaults to `200 202 401 5m`.
* `nginx.ingress.kubernetes.io/auth-always-set-cookie`:
`<Boolean_Flag>` to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
* `nginx.ingress.kubernetes.io/auth-snippet`:
`<Auth_Snippet>` to specify a custom snippet to use with external authentication, e.g.
```yaml
nginx.ingress.kubernetes.io/auth-url: http://foo.com/external-auth
nginx.ingress.kubernetes.io/auth-snippet: |
proxy_set_header Foo-Header 42;
```
> Note: `nginx.ingress.kubernetes.io/auth-snippet` is an optional annotation. However, it may only be used in conjunction with `nginx.ingress.kubernetes.io/auth-url` and will be ignored if `nginx.ingress.kubernetes.io/auth-url` is not set
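A minimal external-authentication sketch (the auth service URLs and header names below are placeholders; a real setup such as oauth2-proxy will differ):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-User, X-Auth-Email"
```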
!!! example
Please check the [external-auth](../../examples/auth/external-auth/README.md) example.
#### Global External Authentication
By default the controller redirects all requests to an existing service that provides authentication if `global-auth-url` is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use `enable-global-auth: "false"` in the NGINX ConfigMap.
`nginx.ingress.kubernetes.io/enable-global-auth`:
indicates if GlobalExternalAuth configuration should be applied or not to this Ingress rule. The default value is `"true"`.
!!! note
For more information please see [global-auth-url](./configmap.md#global-auth-url).
### Rate Limiting
These annotations define limits on connections and transmission rates. These can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).
* `nginx.ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
* `nginx.ingress.kubernetes.io/limit-rps`: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) ***default:*** 503 is returned.
* `nginx.ingress.kubernetes.io/limit-rpm`: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) ***default:*** 503 is returned.
* `nginx.ingress.kubernetes.io/limit-burst-multiplier`: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) ***default:*** 503 is returned.
* `nginx.ingress.kubernetes.io/limit-rate-after`: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.
* `nginx.ingress.kubernetes.io/limit-rate`: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.
* `nginx.ingress.kubernetes.io/limit-allowlist`: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.
If you specify multiple annotations in a single Ingress rule, limits are applied in the order `limit-connections`, `limit-rpm`, `limit-rps`.
To configure settings globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the [NGINX ConfigMap](./configmap.md#limit-rate). The value set in an Ingress annotation will override the global setting.
The client IP address will be set based on the use of [PROXY protocol](./configmap.md#use-proxy-protocol) or from the `X-Forwarded-For` header value when [use-forwarded-headers](./configmap.md#use-forwarded-headers) is enabled.
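For illustration, a sketch limiting each client IP to 5 requests per second with a burst of 15 (5 × 3) and at most 10 concurrent connections (the values are arbitrary examples):
```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "5"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"
    nginx.ingress.kubernetes.io/limit-connections: "10"
```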
### Permanent Redirect
This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example `nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com` would redirect everything to Google.
### Permanent Redirect Code
This annotation allows you to modify the status code used for permanent redirects. For example `nginx.ingress.kubernetes.io/permanent-redirect-code: '308'` would return your permanent-redirect with a 308.
### Temporal Redirect
This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example `nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com` would redirect everything to Google with a Return Code of 302 (Moved Temporarily).
### Temporal Redirect Code
This annotation allows you to modify the status code used for temporal redirects. For example `nginx.ingress.kubernetes.io/temporal-redirect-code: '307'` would return your temporal-redirect with a 307.
### SSL Passthrough
The annotation `nginx.ingress.kubernetes.io/ssl-passthrough` instructs the controller to send TLS connections directly
to the backend instead of letting NGINX decrypt the communication. See also [TLS/HTTPS](../tls.md#ssl-passthrough) in
the User guide.
!!! note
SSL Passthrough is **disabled by default** and requires starting the controller with the
[`--enable-ssl-passthrough`](../cli-arguments.md) flag.
!!! attention
Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough
invalidates all the other annotations set on an Ingress object.
### Service Upstream
By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.
The `nginx.ingress.kubernetes.io/service-upstream` annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.
This can be desirable for things like zero-downtime deployments. See issue [#257](https://github.com/kubernetes/ingress-nginx/issues/257).
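A minimal sketch to opt a single Ingress into this behavior:
```yaml
nginx.ingress.kubernetes.io/service-upstream: "true"
```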
#### Known Issues
If the `service-upstream` annotation is specified, the following things should be taken into consideration:
* Sticky Sessions will not work as only round-robin load balancing is supported.
* The `proxy_next_upstream` directive will not have any effect meaning on error the request will not be dispatched to another upstream.
### Server-side HTTPS enforcement through redirect
By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress.
If you want to disable this behavior globally, you can use `ssl-redirect: "false"` in the NGINX [ConfigMap](./configmap.md#ssl-redirect).
To configure this feature for specific ingress resources, you can use the `nginx.ingress.kubernetes.io/ssl-redirect: "false"`
annotation in the particular resource.
When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS
even when there is no TLS certificate available.
This can be achieved by using the `nginx.ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.
To preserve the trailing slash in the URI with `ssl-redirect`, set `nginx.ingress.kubernetes.io/preserve-trailing-slash: "true"` annotation for that particular resource.
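For example, when TLS is terminated outside the cluster, a redirect to HTTPS can still be enforced for a particular resource with:
```yaml
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
```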
### Redirect from/to www
In some scenarios it is required to redirect from `www.domain.com` to `domain.com` or vice versa; which way the redirect is performed depends on the configured `host` value in the Ingress object.
For example, if `.spec.rules.host` is configured with a value like `www.example.com`, then this annotation will redirect from `example.com` to `www.example.com`. If `.spec.rules.host` is configured with a value like `example.com`, so without a `www`, then this annotation will redirect from `www.example.com` to `example.com` instead.
To enable this feature use the annotation `nginx.ingress.kubernetes.io/from-to-www-redirect: "true"`
!!! attention
If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`) the annotation will be omitted.
!!! attention
For HTTPS to HTTPS redirects, it is mandatory that the SSL Certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.
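A sketch of how this might look on a complete Ingress object; the hostnames, Secret and Service names below are placeholders:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                 # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:                            # the certificate should cover both FQDNs
        - example.com
        - www.example.com
      secretName: example-tls           # hypothetical Secret
  rules:
    - host: example.com                 # www.example.com will be redirected here
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc       # hypothetical Service
                port:
                  number: 80
```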
### Denylist source range
You can specify blocked client IP source ranges through the `nginx.ingress.kubernetes.io/denylist-source-range` annotation.
The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.
To configure this setting globally for all Ingress rules, the `denylist-source-range` value may be set in the [NGINX ConfigMap](./configmap.md#denylist-source-range).
!!! note
Adding an annotation to an Ingress rule overrides any global restriction.
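For example, to block the ranges from the example above:
```yaml
nginx.ingress.kubernetes.io/denylist-source-range: "10.0.0.0/24,172.10.0.1"
```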
### Whitelist source range
You can specify allowed client IP source ranges through the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation.
The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.
To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the [NGINX ConfigMap](./configmap.md#whitelist-source-range).
!!! note
Adding an annotation to an Ingress rule overrides any global restriction.
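For example, to only allow clients from the ranges in the example above:
```yaml
nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
```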
### Custom timeouts
Using the NGINX ConfigMap it is possible to set the default global timeout for connections to the upstream servers.
In some scenarios different values are required. To allow this, the following annotations provide this customization:
- `nginx.ingress.kubernetes.io/proxy-connect-timeout`
- `nginx.ingress.kubernetes.io/proxy-send-timeout`
- `nginx.ingress.kubernetes.io/proxy-read-timeout`
- `nginx.ingress.kubernetes.io/proxy-next-upstream`
- `nginx.ingress.kubernetes.io/proxy-next-upstream-timeout`
- `nginx.ingress.kubernetes.io/proxy-next-upstream-tries`
- `nginx.ingress.kubernetes.io/proxy-request-buffering`
If you indicate [Backend Protocol](#backend-protocol) as `GRPC` or `GRPCS`, the following grpc values will be set and inherited from proxy timeouts:
- [`grpc_connect_timeout=5s`](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout), from `nginx.ingress.kubernetes.io/proxy-connect-timeout`
- [`grpc_send_timeout=60s`](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_send_timeout), from `nginx.ingress.kubernetes.io/proxy-send-timeout`
- [`grpc_read_timeout=60s`](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_read_timeout), from `nginx.ingress.kubernetes.io/proxy-read-timeout`
Note: All timeout values are unitless and in seconds, e.g. `nginx.ingress.kubernetes.io/proxy-read-timeout: "120"` sets a valid 120-second proxy read timeout.
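A sketch combining several of these annotations; the values are illustrative and, per the note above, the timeouts are interpreted as seconds:
```yaml
nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_503"
nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "2"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
```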
### Proxy redirect
The annotations `nginx.ingress.kubernetes.io/proxy-redirect-from` and `nginx.ingress.kubernetes.io/proxy-redirect-to` set the first and second parameters of NGINX's `proxy_redirect` directive respectively. They make it possible to
set the text that should be changed in the `Location` and `Refresh` header fields of a [proxied server response](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect).
Setting "off" or "default" in the annotation `nginx.ingress.kubernetes.io/proxy-redirect-from` disables `nginx.ingress.kubernetes.io/proxy-redirect-to`,
otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.
By default the value of each annotation is "off".
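For instance, to rewrite a backend's internal origin in `Location` and `Refresh` headers to the public hostname (both values are illustrative and contain no spaces):
```yaml
nginx.ingress.kubernetes.io/proxy-redirect-from: "http://backend.internal/"
nginx.ingress.kubernetes.io/proxy-redirect-to: "https://example.com/"
```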
### Custom max body size
For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-body-size).
To use custom values in an Ingress rule, define the following annotation:
```yaml
nginx.ingress.kubernetes.io/proxy-body-size: 8m
```
### Proxy cookie domain
Sets a text that [should be changed in the domain attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the "Set-Cookie" header fields of a proxied server response.
To configure this setting globally for all Ingress rules, the `proxy-cookie-domain` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-cookie-domain).
### Proxy cookie path
Sets a text that [should be changed in the path attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the "Set-Cookie" header fields of a proxied server response.
To configure this setting globally for all Ingress rules, the `proxy-cookie-path` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-cookie-path).
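A sketch showing both cookie-rewriting annotations together, assuming the values are passed verbatim to the underlying `proxy_cookie_domain` and `proxy_cookie_path` directives (the domain and path values are illustrative):
```yaml
nginx.ingress.kubernetes.io/proxy-cookie-domain: "backend.internal example.com"
nginx.ingress.kubernetes.io/proxy-cookie-path: "/one/ /"
```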
### Proxy buffering
Enable or disable proxy buffering [`proxy_buffering`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering).
By default proxy buffering is disabled in the NGINX config.
To configure this setting globally for all Ingress rules, the `proxy-buffering` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-buffering).
To use custom values in an Ingress rule, define the following annotation:
```yaml
nginx.ingress.kubernetes.io/proxy-buffering: "on"
```
### Proxy buffers Number
Sets the number of the buffers in [`proxy_buffers`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) used for reading the first part of the response received from the proxied server.
By default the number of proxy buffers is set to 4.
To configure this setting globally, set `proxy-buffers-number` in [NGINX ConfigMap](./configmap.md#proxy-buffers-number). To use custom values in an Ingress rule, define this annotation:
```yaml
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
```
### Proxy buffer size
Sets the size of the buffer [`proxy_buffer_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) used for reading the first part of the response received from the proxied server.
By default the proxy buffer size is set to "4k".
To configure this setting globally, set `proxy-buffer-size` in [NGINX ConfigMap](./configmap.md#proxy-buffer-size). To use custom values in an Ingress rule, define this annotation:
```yaml
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
```
### Proxy busy buffers size
[Limits the total size of buffers that can be busy](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read.
By default the proxy busy buffers size is set to "8k".
To configure this setting globally, set `proxy-busy-buffers-size` in the [ConfigMap](./configmap.md#proxy-busy-buffers-size). To use custom values in an Ingress rule, define this annotation:
```yaml
nginx.ingress.kubernetes.io/proxy-busy-buffers-size: "16k"
```
### Proxy max temp file size
When [`buffering`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the [`proxy_buffer_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and [`proxy_buffers`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directives, a part of the response can be saved to a temporary file. This directive sets the maximum `size` of the temporary file setting the [`proxy_max_temp_file_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size). The size of data written to the temporary file at a time is set by the [`proxy_temp_file_write_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_temp_file_write_size) directive.
The zero value disables buffering of responses to temporary files.
To use custom values in an Ingress rule, define this annotation:
```yaml
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"
```
### Proxy HTTP version
Using this annotation sets the [`proxy_http_version`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) that the Nginx reverse proxy will use to communicate with the backend.
By default this is set to "1.1".
```yaml
nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
```
### SSL ciphers
Specifies the [enabled ciphers](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers).
Using this annotation will set the `ssl_ciphers` directive at the server level. This configuration is active for all the paths in the host.
```yaml
nginx.ingress.kubernetes.io/ssl-ciphers: "ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP"
```
The following annotation will set the `ssl_prefer_server_ciphers` directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.
```yaml
nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: "true"
```
### Connection proxy header
Using this annotation will override the default connection header set by NGINX.
To use custom values in an Ingress rule, define the annotation:
```yaml
nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
```
### Enable Access Log
Access logs are enabled by default, but in some scenarios it might be necessary to disable them for a given
ingress. To do this, use the annotation:
```yaml
nginx.ingress.kubernetes.io/enable-access-log: "false"
```
### Enable Rewrite Log
Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs.
Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:
```yaml
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
```
### Enable Opentelemetry
Opentelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden
to enable or disable it for a specific ingress (e.g. to turn off telemetry for external health check endpoints):
```yaml
nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
```
### Opentelemetry Trust Incoming Span
The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will
sometimes need to be overridden to enable or disable it for a specific ingress (e.g. to only enable it on a private endpoint):
```yaml
nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-spans: "true"
```
### X-Forwarded-Prefix Header
To add the non-standard `X-Forwarded-Prefix` header to the upstream request with a string value, the following annotation can be used:
```yaml
nginx.ingress.kubernetes.io/x-forwarded-prefix: "/path"
```
### ModSecurity
[ModSecurity](http://modsecurity.org/) is an open source web application firewall. It can be enabled for a particular set
of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the
[ConfigMap](./configmap.md#enable-modsecurity). Note this will enable ModSecurity for all paths, and each path
must be disabled manually.
It can be enabled using the following annotation:
```yaml
nginx.ingress.kubernetes.io/enable-modsecurity: "true"
```
ModSecurity will run in "Detection-Only" mode using the [recommended configuration](https://github.com/SpiderLabs/ModSecurity/blob/v3/master/modsecurity.conf-recommended).
You can enable the [OWASP Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) by
setting the following annotation:
```yaml
nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
```
You can pass transactionIDs from nginx by setting up the following:
```yaml
nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"
```
You can also add your own set of modsecurity rules via a snippet:
```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
SecRuleEngine On
SecDebugLog /tmp/modsec_debug.log
```
Note: If you use both `enable-owasp-core-rules` and `modsecurity-snippet` annotations together, only the
`modsecurity-snippet` will take effect. If you wish to include the [OWASP Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) or
[recommended configuration](https://github.com/SpiderLabs/ModSecurity/blob/v3/master/modsecurity.conf-recommended) simply use the include
statement:
nginx 0.24.1 and below
```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
Include /etc/nginx/modsecurity/modsecurity.conf
```
nginx 0.25.0 and above
```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
```
### Backend Protocol
Using the `backend-protocol` annotation it is possible to indicate how NGINX should communicate with the backend service. (Replaces `secure-backends` in older versions)
Valid Values: HTTP, HTTPS, AUTO_HTTP, GRPC, GRPCS and FCGI
By default NGINX uses `HTTP`.
Example:
```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```
### Use Regex
!!! attention
When using this annotation with the NGINX annotation `nginx.ingress.kubernetes.io/affinity` of type `cookie`, `nginx.ingress.kubernetes.io/session-cookie-path` must be also set; Session cookie paths do not support regex.
Using the `nginx.ingress.kubernetes.io/use-regex` annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is `false`.
The following will indicate that regular expression paths are being used:
```yaml
nginx.ingress.kubernetes.io/use-regex: "true"
```
The following will indicate that regular expression paths are __not__ being used:
```yaml
nginx.ingress.kubernetes.io/use-regex: "false"
```
When this annotation is set to `true`, the case insensitive regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
Additionally, if the [`rewrite-target` annotation](#rewrite) is used on any Ingress for a given host, then the case insensitive regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
Please read about [ingress path matching](../ingress-path-matching.md) before using this modifier.
### Satisfy
By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.
```yaml
nginx.ingress.kubernetes.io/satisfy: "any"
```
### Mirror
Enables a request to be mirrored to a mirror backend. Responses by mirror backends are ignored. This feature is useful for seeing how requests will behave in "test" backends.
The mirror backend can be set by applying:
```yaml
nginx.ingress.kubernetes.io/mirror-target: https://test.env.com$request_uri
```
By default the request-body is sent to the mirror backend, but can be turned off by applying:
```yaml
nginx.ingress.kubernetes.io/mirror-request-body: "off"
```
Also, by default the `Host` header for mirrored requests is set to the same value as the host part of the URI in the "mirror-target" annotation. You can override it with the "mirror-host" annotation:
```yaml
nginx.ingress.kubernetes.io/mirror-target: https://1.2.3.4$request_uri
nginx.ingress.kubernetes.io/mirror-host: "test.env.com"
```
**Note:** The mirror directive will be applied to all paths within the ingress resource.
The request sent to the mirror is linked to the original request. If you have a slow mirror backend, the original request will be throttled.
For more information on the mirror module see [ngx_http_mirror_module](https://nginx.org/en/docs/http/ngx_http_mirror_module.html).
### Stream snippet
Using the annotation `nginx.ingress.kubernetes.io/stream-snippet` it is possible to add custom stream configuration.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/stream-snippet: |
server {
listen 8000;
proxy_pass 127.0.0.1:80;
}
```
default the controller redirects 308 to HTTPS if TLS is enabled for that ingress If you want to disable this behavior globally you can use ssl redirect false in the NGINX ConfigMap configmap md ssl redirect To configure this feature for specific ingress resources you can use the nginx ingress kubernetes io ssl redirect false annotation in the particular resource When using SSL offloading outside of cluster e g AWS ELB it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available This can be achieved by using the nginx ingress kubernetes io force ssl redirect true annotation in the particular resource To preserve the trailing slash in the URI with ssl redirect set nginx ingress kubernetes io preserve trailing slash true annotation for that particular resource Redirect from to www In some scenarios it is required to redirect from www domain com to domain com or vice versa which way the redirect is performed depends on the configured host value in the Ingress object For example if spec rules host is configured with a value like www example com then this annotation will redirect from example com to www example com If spec rules host is configured with a value like example com so without a www then this annotation will redirect from www example com to example com instead To enable this feature use the annotation nginx ingress kubernetes io from to www redirect true attention If at some point a new Ingress is created with a host equal to one of the options like domain com the annotation will be omitted attention For HTTPS to HTTPS redirects is mandatory the SSL Certificate defined in the Secret located in the TLS section of Ingress contains both FQDN in the common name of the certificate Denylist source range You can specify blocked client IP source ranges through the nginx ingress kubernetes io denylist source range annotation The value is a comma separated list of CIDRs https en wikipedia org wiki Classless Inter Domain Routing e g 10 0 0 0 24 172 10 0 1 To configure this setting globally for all Ingress rules the denylist source range value may be set in the NGINX ConfigMap configmap md denylist source range note Adding an annotation to an Ingress rule overrides any global restriction Whitelist source range You can specify allowed client IP source ranges through the nginx ingress kubernetes io whitelist source range annotation The value is a comma separated list of CIDRs https en wikipedia org wiki Classless Inter Domain Routing e g 10 0 0 0 24 172 10 0 1 To configure this setting globally for all Ingress rules the whitelist source range value may be set in the NGINX ConfigMap configmap md whitelist source range note Adding an annotation to an Ingress rule overrides any global restriction Custom timeouts Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers In some scenarios is required to have different values To allow this we provide annotations that allows this customization nginx ingress kubernetes io proxy connect timeout nginx ingress kubernetes io proxy send timeout nginx ingress kubernetes io proxy read timeout nginx ingress kubernetes io proxy next upstream nginx ingress kubernetes io proxy next upstream timeout nginx ingress kubernetes io proxy next upstream tries nginx ingress kubernetes io proxy request buffering If you indicate Backend Protocol backend protocol as GRPC or GRPCS the following grpc values will be set and inherited from proxy timeouts grpc connect timeout 5s https 
nginx org en docs http ngx http grpc module html grpc connect timeout from nginx ingress kubernetes io proxy connect timeout grpc send timeout 60s https nginx org en docs http ngx http grpc module html grpc send timeout from nginx ingress kubernetes io proxy send timeout grpc read timeout 60s https nginx org en docs http ngx http grpc module html grpc read timeout from nginx ingress kubernetes io proxy read timeout Note All timeout values are unitless and in seconds e g nginx ingress kubernetes io proxy read timeout 120 sets a valid 120 seconds proxy read timeout Proxy redirect The annotations nginx ingress kubernetes io proxy redirect from and nginx ingress kubernetes io proxy redirect to will set the first and second parameters of NGINX s proxy redirect directive respectively It is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response https nginx org en docs http ngx http proxy module html proxy redirect Setting off or default in the annotation nginx ingress kubernetes io proxy redirect from disables nginx ingress kubernetes io proxy redirect to otherwise both annotations must be used in unison Note that each annotation must be a string without spaces By default the value of each annotation is off Custom max body size For NGINX an 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body This size can be configured by the parameter client max body size https nginx org en docs http ngx http core module html client max body size To configure this setting globally for all Ingress rules the proxy body size value may be set in the NGINX ConfigMap configmap md proxy body size To use custom values in an Ingress rule define these annotation yaml nginx ingress kubernetes io proxy body size 8m Proxy cookie domain Sets a text that should be changed in the domain attribute https nginx org en docs http ngx http proxy module html proxy cookie domain of the Set Cookie header fields of a proxied server response To configure this setting globally for all Ingress rules the proxy cookie domain value may be set in the NGINX ConfigMap configmap md proxy cookie domain Proxy cookie path Sets a text that should be changed in the path attribute https nginx org en docs http ngx http proxy module html proxy cookie path of the Set Cookie header fields of a proxied server response To configure this setting globally for all Ingress rules the proxy cookie path value may be set in the NGINX ConfigMap configmap md proxy cookie path Proxy buffering Enable or disable proxy buffering proxy buffering https nginx org en docs http ngx http proxy module html proxy buffering By default proxy buffering is disabled in the NGINX config To configure this setting globally for all Ingress rules the proxy buffering value may be set in the NGINX ConfigMap configmap md proxy buffering To use custom values in an Ingress rule define these annotation yaml nginx ingress kubernetes io proxy buffering on Proxy buffers Number Sets the number of the buffers in proxy buffers https nginx org en docs http ngx http proxy module html proxy buffers used for reading the first part of the response received from the proxied server By default proxy buffers number is set as 4 To configure this setting globally set proxy buffers number in NGINX ConfigMap configmap md proxy buffers number To use custom values in an Ingress rule define this annotation yaml nginx ingress kubernetes io proxy buffers number 4 Proxy buffer size 
Sets the size of the buffer proxy buffer size https nginx org en docs http ngx http proxy module html proxy buffer size used for reading the first part of the response received from the proxied server By default proxy buffer size is set as 4k To configure this setting globally set proxy buffer size in NGINX ConfigMap configmap md proxy buffer size To use custom values in an Ingress rule define this annotation yaml nginx ingress kubernetes io proxy buffer size 8k Proxy busy buffers size Limits the total size of buffers that can be busy https nginx org en docs http ngx http proxy module html proxy busy buffers size sending a response to the client while the response is not yet fully read By default proxy busy buffers size is set as 8k To configure this setting globally set proxy busy buffers size in the ConfigMap configmap md proxy busy buffers size To use custom values in an Ingress rule define this annotation yaml nginx ingress kubernetes io proxy busy buffers size 16k Proxy max temp file size When buffering https nginx org en docs http ngx http proxy module html proxy buffering of responses from the proxied server is enabled and the whole response does not fit into the buffers set by the proxy buffer size https nginx org en docs http ngx http proxy module html proxy buffer size and proxy buffers https nginx org en docs http ngx http proxy module html proxy buffers directives a part of the response can be saved to a temporary file This directive sets the maximum size of the temporary file setting the proxy max temp file size https nginx org en docs http ngx http proxy module html proxy max temp file size The size of data written to the temporary file at a time is set by the proxy temp file write size https nginx org en docs http ngx http proxy module html proxy temp file write size directive The zero value disables buffering of responses to temporary files To use custom values in an Ingress rule define this annotation yaml nginx ingress kubernetes io proxy max temp file size 1024m Proxy HTTP version Using this annotation sets the proxy http version https nginx org en docs http ngx http proxy module html proxy http version that the Nginx reverse proxy will use to communicate with the backend By default this is set to 1 1 yaml nginx ingress kubernetes io proxy http version 1 0 SSL ciphers Specifies the enabled ciphers https nginx org en docs http ngx http ssl module html ssl ciphers Using this annotation will set the ssl ciphers directive at the server level This configuration is active for all the paths in the host yaml nginx ingress kubernetes io ssl ciphers ALL aNULL EXPORT56 RC4 RSA HIGH MEDIUM LOW SSLv2 EXP The following annotation will set the ssl prefer server ciphers directive at the server level This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols yaml nginx ingress kubernetes io ssl prefer server ciphers true Connection proxy header Using this annotation will override the default connection header set by NGINX To use custom values in an Ingress rule define the annotation yaml nginx ingress kubernetes io connection proxy header keep alive Enable Access Log Access logs are enabled by default but in some scenarios access logs might be required to be disabled for a given ingress To do this use the annotation yaml nginx ingress kubernetes io enable access log false Enable Rewrite Log Rewrite logs are not enabled by default In some scenarios it could be required to enable NGINX rewrite logs Note that rewrite logs 
are sent to the error log file at the notice level To enable this feature use the annotation yaml nginx ingress kubernetes io enable rewrite log true Enable Opentelemetry Opentelemetry can be enabled or disabled globally through the ConfigMap but this will sometimes need to be overridden to enable it or disable it for a specific ingress e g to turn off telemetry of external health check endpoints yaml nginx ingress kubernetes io enable opentelemetry true Opentelemetry Trust Incoming Span The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap but this will sometimes need to be overridden to enable it or disable it for a specific ingress e g only enable on a private endpoint yaml nginx ingress kubernetes io opentelemetry trust incoming spans true X Forwarded Prefix Header To add the non standard X Forwarded Prefix header to the upstream request with a string value the following annotation can be used yaml nginx ingress kubernetes io x forwarded prefix path ModSecurity ModSecurity http modsecurity org is an OpenSource Web Application firewall It can be enabled for a particular set of ingress locations The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap configmap md enable modsecurity Note this will enable ModSecurity for all paths and each path must be disabled manually It can be enabled using the following annotation yaml nginx ingress kubernetes io enable modsecurity true ModSecurity will run in Detection Only mode using the recommended configuration https github com SpiderLabs ModSecurity blob v3 master modsecurity conf recommended You can enable the OWASP Core Rule Set https www modsecurity org CRS Documentation by setting the following annotation yaml nginx ingress kubernetes io enable owasp core rules true You can pass transactionIDs from nginx by setting up the following yaml nginx ingress kubernetes io modsecurity transaction id request id You can also add your own set of modsecurity rules via a snippet yaml nginx ingress kubernetes io modsecurity snippet SecRuleEngine On SecDebugLog tmp modsec debug log Note If you use both enable owasp core rules and modsecurity snippet annotations together only the modsecurity snippet will take effect If you wish to include the OWASP Core Rule Set https www modsecurity org CRS Documentation or recommended configuration https github com SpiderLabs ModSecurity blob v3 master modsecurity conf recommended simply use the include statement nginx 0 24 1 and below yaml nginx ingress kubernetes io modsecurity snippet Include etc nginx owasp modsecurity crs nginx modsecurity conf Include etc nginx modsecurity modsecurity conf nginx 0 25 0 and above yaml nginx ingress kubernetes io modsecurity snippet Include etc nginx owasp modsecurity crs nginx modsecurity conf Backend Protocol Using backend protocol annotations is possible to indicate how NGINX should communicate with the backend service Replaces secure backends in older versions Valid Values HTTP HTTPS AUTO HTTP GRPC GRPCS and FCGI By default NGINX uses HTTP Example yaml nginx ingress kubernetes io backend protocol HTTPS Use Regex attention When using this annotation with the NGINX annotation nginx ingress kubernetes io affinity of type cookie nginx ingress kubernetes io session cookie path must be also set Session cookie paths do not support regex Using the nginx ingress kubernetes io use regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions The default value is false The 
following will indicate that regular expression paths are being used yaml nginx ingress kubernetes io use regex true The following will indicate that regular expression paths are not being used yaml nginx ingress kubernetes io use regex false When this annotation is set to true the case insensitive regular expression location modifier https nginx org en docs http ngx http core module html location will be enforced on ALL paths for a given host regardless of what Ingress they are defined on Additionally if the rewrite target annotation rewrite is used on any Ingress for a given host then the case insensitive regular expression location modifier https nginx org en docs http ngx http core module html location will be enforced on ALL paths for a given host regardless of what Ingress they are defined on Please read about ingress path matching ingress path matching md before using this modifier Satisfy By default a request would need to satisfy all authentication requirements in order to be allowed By using this annotation requests that satisfy either any or all authentication requirements are allowed based on the configuration value yaml nginx ingress kubernetes io satisfy any Mirror Enables a request to be mirrored to a mirror backend Responses by mirror backends are ignored This feature is useful to see how requests will react in test backends The mirror backend can be set by applying yaml nginx ingress kubernetes io mirror target https test env com request uri By default the request body is sent to the mirror backend but can be turned off by applying yaml nginx ingress kubernetes io mirror request body off Also by default header Host for mirrored requests will be set the same as a host part of uri in the mirror target annotation You can override it by mirror host annotation yaml nginx ingress kubernetes io mirror target https 1 2 3 4 request uri nginx ingress kubernetes io mirror host test env com Note The mirror directive will be applied to all paths within the ingress resource The request sent to the mirror is linked to the original request If you have a slow mirror backend then the original request will throttle For more information on the mirror module see ngx http mirror module https nginx org en docs http ngx http mirror module html Stream snippet Using the annotation nginx ingress kubernetes io stream snippet it is possible to add custom stream configuration yaml apiVersion networking k8s io v1 kind Ingress metadata annotations nginx ingress kubernetes io stream snippet server listen 8000 proxy pass 127 0 0 1 80 |
# ModSecurity Web Application Firewall
ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - [https://www.modsecurity.org](https://www.modsecurity.org)
The [ModSecurity-nginx](https://github.com/SpiderLabs/ModSecurity-nginx) connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
The default ModSecurity configuration file is located in `/etc/nginx/modsecurity/modsecurity.conf`. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.
To enable the ModSecurity feature we need to specify `enable-modsecurity: "true"` in the configuration configmap.
>__Note:__ the default configuration uses detection only, because that minimizes the chances of post-installation disruption.
Due to the value of the setting [SecAuditLogType=Concurrent](https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual-(v2.x)#secauditlogtype), the ModSecurity log is stored in multiple files inside the directory `/var/log/audit`.
The default `Serial` value of `SecAuditLogType` can impact performance.
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.
The directory `/etc/nginx/owasp-modsecurity-crs` contains the [OWASP ModSecurity Core Rule Set repository](https://github.com/coreruleset/coreruleset).
Using `enable-owasp-modsecurity-crs: "true"` we enable the use of the rules.
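Both options are plain keys in the controller's ConfigMap. A minimal sketch, assuming the ConfigMap name and namespace of a default installation:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed default name
  namespace: ingress-nginx         # assumed default namespace
data:
  # run ModSecurity (detection-only by default) in all locations
  enable-modsecurity: "true"
  # additionally load the OWASP ModSecurity Core Rule Set
  enable-owasp-modsecurity-crs: "true"
```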
## Supported annotations
For more info on supported annotations, please see [annotations/#modsecurity](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#modsecurity)
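For example, ModSecurity and the OWASP Core Rule Set can be switched on for a single Ingress with the annotations documented there. A sketch, with illustrative Ingress metadata:
```yaml
kind: Ingress
metadata:
  name: demo   # illustrative name
  annotations:
    # enable ModSecurity for the locations generated from this Ingress
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    # also load the OWASP Core Rule Set for those locations
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
```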
## Example of using ModSecurity with plugins via the helm chart
Suppose you have a ConfigMap that contains the contents of the [nextcloud-rule-exclusions plugin](https://github.com/coreruleset/nextcloud-rule-exclusions-plugin/blob/main/plugins/nextcloud-rule-exclusions-before.conf) like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: modsecurity-plugins
data:
  empty-after.conf: |
    # no data
  empty-before.conf: |
    # no data
  empty-config.conf: |
    # no data
  nextcloud-rule-exclusions-before.conf: |
    # this is just a snippet
    # find the full file at https://github.com/coreruleset/nextcloud-rule-exclusions-plugin
    #
    # [ File Manager ]
    # The web interface uploads files, and interacts with the user.
    SecRule REQUEST_FILENAME "@contains /remote.php/webdav" \
        "id:9508102,\
        phase:1,\
        pass,\
        t:none,\
        nolog,\
        ver:'nextcloud-rule-exclusions-plugin/1.2.0',\
        ctl:ruleRemoveById=920420,\
        ctl:ruleRemoveById=920440,\
        ctl:ruleRemoveById=941000-942999,\
        ctl:ruleRemoveById=951000-951999,\
        ctl:ruleRemoveById=953100-953130,\
        ctl:ruleRemoveByTag=attack-injection-php"
```
If you're using the helm chart, you can pass in the following parameters in your `values.yaml`:
```yaml
controller:
  config:
    # Enables Modsecurity
    enable-modsecurity: "true"
    # Update ModSecurity config and rules
    modsecurity-snippet: |
      # this enables the mod security nextcloud plugin
      Include /etc/nginx/owasp-modsecurity-crs/plugins/nextcloud-rule-exclusions-before.conf
      # this enables the default OWASP Core Rule Set
      Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
      # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)
      SecRuleEngine On
      # Enable scanning of the request body
      SecRequestBodyAccess On
      # Enable XML and JSON parsing
      SecRule REQUEST_HEADERS:Content-Type "(?:text|application(?:/soap\+|/)|application/xml)/" \
        "id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"
      SecRule REQUEST_HEADERS:Content-Type "application/json" \
        "id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON"
      # Reject if larger (we could also let it pass with ProcessPartial)
      SecRequestBodyLimitAction Reject
      # Send ModSecurity audit logs to the stdout (only for rejected requests)
      SecAuditLog /dev/stdout
      # format the logs in JSON
      SecAuditLogFormat JSON
      # could be On/Off/RelevantOnly
      SecAuditEngine RelevantOnly
  # Add a volume for the plugins directory
  extraVolumes:
    - name: plugins
      configMap:
        name: modsecurity-plugins
  # override the /etc/nginx/owasp-modsecurity-crs/plugins with your ConfigMap
  extraVolumeMounts:
    - name: plugins
      mountPath: /etc/nginx/owasp-modsecurity-crs/plugins
```
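With those values in place, the chart can be installed or upgraded as usual. A sketch, assuming the official chart repository and an `ingress-nginx` release in the `ingress-nginx` namespace:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
```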
# OpenTelemetry
Enables distributed telemetry for requests served by NGINX via The OpenTelemetry Project.
Using the third party module [opentelemetry-cpp-contrib/nginx](https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/nginx), the Ingress-Nginx Controller can configure NGINX to enable [OpenTelemetry](http://opentelemetry.io) instrumentation.
By default this feature is disabled.
Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and
practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability
and monitoring purposes.
<p align="center">
<a href="https://www.youtube.com/watch?v=jpBfgJpTcfw&t=129" target="_blank" rel="noopener noreferrer">
<img src="https://img.youtube.com/vi/jpBfgJpTcfw/0.jpg" alt="Video Thumbnail" />
</a>
</p>
<p align="center">Demo: OpenTelemetry in Ingress NGINX.</p>
## Usage
To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap:
```yaml
data:
  enable-opentelemetry: "true"
```
To enable or disable instrumentation for a single Ingress, use
the `enable-opentelemetry` annotation:
```yaml
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
```
We must also set the host to use when uploading traces:
```yaml
otlp-collector-host: "otel-coll-collector.otel.svc"
```
NOTE: While the option is called `otlp-collector-host`, you can point it at any backend that receives OTLP over gRPC.
Next you will need to deploy a distributed telemetry system which uses OpenTelemetry.
[opentelemetry-collector](https://github.com/open-telemetry/opentelemetry-collector), [Jaeger](https://www.jaegertracing.io/),
[Tempo](https://github.com/grafana/tempo), and [zipkin](https://zipkin.io/)
have been tested.
Other optional configuration options:
```yaml
# specifies the name to use for the server span
opentelemetry-operation-name
# sets whether or not to trust incoming telemetry spans
opentelemetry-trust-incoming-span
# specifies the port to use when uploading traces, Default: 4317
otlp-collector-port
# specifies the service name to use for any traces created, Default: nginx
otel-service-name
# The maximum queue size. After the size is reached data are dropped.
otel-max-queuesize
# The delay interval in milliseconds between two consecutive exports.
otel-schedule-delay-millis
# How long the export can run before it is cancelled.
otel-schedule-delay-millis
# The maximum batch size of every export. It must be smaller or equal to maxQueueSize.
otel-max-export-batch-size
# specifies sample rate for any traces created, Default: 0.01
otel-sampler-ratio
# specifies the sampler to be used when sampling traces.
# The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOff
otel-sampler
# Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false
otel-sampler-parent-based
```
Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:
```yaml
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"
```
## Examples
The following examples show how to deploy and test different distributed telemetry systems. These examples can be performed using Docker Desktop.
The [esigo/nginx-example](https://github.com/esigo/nginx-example)
GitHub repository contains an example of a simple hello service:
```mermaid
graph TB
subgraph Browser
start["http://esigo.dev/hello/nginx"]
end
subgraph app
sa[service-a]
sb[service-b]
sa --> |name: nginx| sb
sb --> |hello nginx!| sa
end
subgraph otel
otc["Otel Collector"]
end
subgraph observability
tempo["Tempo"]
grafana["Grafana"]
backend["Jaeger"]
zipkin["Zipkin"]
end
subgraph ingress-nginx
ngx[nginx]
end
subgraph ngx[nginx]
ng[nginx]
om[OpenTelemetry module]
end
subgraph Node
app
otel
observability
ingress-nginx
om --> |otlp-gRPC| otc --> |jaeger| backend
otc --> |zipkin| zipkin
otc --> |otlp-gRPC| tempo --> grafana
sa --> |otlp-gRPC| otc
sb --> |otlp-gRPC| otc
start --> ng --> sa
end
```
To install the example and collectors run:
1. Enable OpenTelemetry and set the otlp-collector-host:
```yaml
$ echo '
apiVersion: v1
kind: ConfigMap
data:
  enable-opentelemetry: "true"
  opentelemetry-config: "/etc/nginx/opentelemetry.toml"
  opentelemetry-operation-name: "HTTP $request_method $service_name $uri"
  opentelemetry-trust-incoming-span: "true"
  otlp-collector-host: "otel-coll-collector.otel.svc"
  otlp-collector-port: "4317"
  otel-max-queuesize: "2048"
  otel-schedule-delay-millis: "5000"
  otel-max-export-batch-size: "512"
  otel-service-name: "nginx-proxy" # Opentelemetry resource name
  otel-sampler: "AlwaysOn" # Also: AlwaysOff, TraceIdRatioBased
  otel-sampler-ratio: "1.0"
  otel-sampler-parent-based: "false"
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
' | kubectl replace -f -
```
2. Deploy otel-collector, grafana and Jaeger backend:
```bash
# add helm charts needed for grafana and OpenTelemetry collector
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# deploy cert-manager needed for OpenTelemetry collector operator
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.yaml
# create observability namespace
kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml
# install OpenTelemetry collector operator
helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator
# deploy OpenTelemetry collector
kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml
# deploy Jaeger all-in-one
kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability
kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability
# deploy zipkin
kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability
# deploy tempo and grafana
helm upgrade --install tempo grafana/tempo --create-namespace -n observability
helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability
```
3. Build and deploy demo app:
```bash
# build images
make images
# deploy demo app:
make deploy-app
```
4. Make a few requests to the Service:
```bash
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8090:80
curl http://esigo.dev:8090/hello/nginx
StatusCode : 200
StatusDescription : OK
Content : {"v":"hello nginx!"}
RawContent : HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 21
Content-Type: text/plain; charset=utf-8
Date: Mon, 10 Oct 2022 17:43:33 GMT
{"v":"hello nginx!"}
Forms : {}
Headers : {[Connection, keep-alive], [Content-Length, 21], [Content-Type, text/plain; charset=utf-8], [Date,
Mon, 10 Oct 2022 17:43:33 GMT]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : System.__ComObject
RawContentLength : 21
```
5. View the Grafana UI:
```bash
kubectl port-forward --namespace=observability service/grafana 3000:80
```
In the Grafana interface we can see the details:

6. View the Jaeger UI:
```bash
kubectl port-forward --namespace=observability service/jaeger-all-in-one-query 16686:16686
```
In the Jaeger interface we can see the details:

7. View the Zipkin UI:
```bash
kubectl port-forward --namespace=observability service/zipkin 9411:9411
```
In the Zipkin interface we can see the details:

## Migration from OpenTracing, Jaeger, Zipkin and Datadog
If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry,
you may need to update various annotations and configurations. Here are the mappings
for common annotations and configurations:
### Annotations
| Legacy | OpenTelemetry |
|---------------------------------------------------------------|-----------------------------------------------------------------|
| `nginx.ingress.kubernetes.io/enable-opentracing` | `nginx.ingress.kubernetes.io/enable-opentelemetry` |
| `nginx.ingress.kubernetes.io/opentracing-trust-incoming-span` | `nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span` |
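For example, an Ingress that previously enabled OpenTracing per resource only needs its annotation keys renamed. A sketch, with illustrative metadata:
```yaml
kind: Ingress
metadata:
  annotations:
    # was: nginx.ingress.kubernetes.io/enable-opentracing: "true"
    nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
    # was: nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: "true"
    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"
```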
### Configs
| Legacy | OpenTelemetry |
|---------------------------------------|----------------------------------------------|
| `opentracing-operation-name` | `opentelemetry-operation-name` |
| `opentracing-location-operation-name` | `opentelemetry-operation-name` |
| `opentracing-trust-incoming-span` | `opentelemetry-trust-incoming-span` |
| `zipkin-collector-port` | `otlp-collector-port` |
| `zipkin-service-name` | `otel-service-name` |
| `zipkin-sample-rate` | `otel-sampler-ratio` |
| `jaeger-collector-port` | `otlp-collector-port` |
| `jaeger-endpoint` | `otlp-collector-port`, `otlp-collector-host` |
| `jaeger-service-name` | `otel-service-name` |
| `jaeger-propagation-format` | `N/A` |
| `jaeger-sampler-type` | `otel-sampler` |
| `jaeger-sampler-param` | `otel-sampler` |
| `jaeger-sampler-host` | `N/A` |
| `jaeger-sampler-port` | `N/A` |
| `jaeger-trace-context-header-name` | `N/A` |
| `jaeger-debug-header` | `N/A` |
| `jaeger-baggage-header` | `N/A` |
| `jaeger-tracer-baggage-header-prefix` | `N/A` |
| `datadog-collector-port` | `otlp-collector-port` |
| `datadog-service-name` | `otel-service-name` |
| `datadog-environment` | `N/A` |
| `datadog-operation-name-override` | `N/A` |
| `datadog-priority-sampling` | `otel-sampler` |
| `datadog-sample-rate`                 | `otel-sampler-ratio`                         |
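Similarly, ConfigMap keys that previously targeted Zipkin can be rewritten using the mappings above. A sketch with illustrative values; the ratio only takes effect together with the `TraceIdRatioBased` sampler, and `otlp-collector-host` must point at your collector:
```yaml
# Legacy (Zipkin):
# zipkin-collector-port: "9411"
# zipkin-service-name: "nginx-proxy"
# zipkin-sample-rate: "1.0"
# OpenTelemetry equivalent:
otlp-collector-host: "otel-coll-collector.otel.svc"
otlp-collector-port: "4317"
otel-service-name: "nginx-proxy"
otel-sampler: "TraceIdRatioBased"
otel-sampler-ratio: "1.0"
```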
# Hardening Guide
Do not use in multi-tenant Kubernetes production installations. This project assumes that users that can create Ingress objects are administrators of the cluster.
## Overview
There are several ways to harden and secure nginx. This documentation draws on two guides, which overlap in some points:
- [nginx CIS Benchmark](https://www.cisecurity.org/benchmark/nginx/)
- [cipherlist.eu](https://cipherlist.eu/) (one of many forks of the now dead project cipherli.st)
This guide describes which of the configurations covered in those guides are already implemented by default
in the nginx implementation of kubernetes ingress, which need to be configured, which are obsolete because
nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult
or not possible.
Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may
leave specific clients unable to reach your site or have similar consequences.
This guide refers to chapters in the CIS Benchmark. For a full explanation you should refer to the benchmark document itself.
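Several of the ACTION NEEDED items in the configuration guide below are plain NGINX ConfigMap options. As a sketch, the corresponding helm values could look like the following; the key names are taken from the table below, and grouping them under `controller.config` assumes the official chart:
```yaml
controller:
  config:
    # 2.4.3 - keepalive_timeout of 10 seconds
    keep-alive: "10"
    # 2.5.4 - hide headers that disclose information about the backend (comma-separated list)
    hide-headers: "X-Powered-By,Server"
    # 4.1.4 - only allow TLSv1.3 (may cut off older clients)
    ssl-protocols: "TLSv1.3"
    # 4.1.5 - stronger ciphers recommended by cipherlist.eu
    ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"
    # 4.1.7 - enable OCSP stapling
    enable-ocsp: "true"
    # 4.1.12 - add the preload directive to the HSTS header
    hsts-preload: "true"
```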
## Configuration Guide
| Chapter in CIS benchmark | Status | Default | Action to do if not default|
|:-------------------------|:-------|:--------|:---------------------------|
| __1 Initial Setup__ ||| |
| ||| |
| __1.1 Installation__||| |
| 1.1.1 Ensure NGINX is installed (Scored)| OK | done through helm charts / following documentation to deploy nginx ingress | |
| 1.1.2 Ensure NGINX is installed from source (Not Scored)| OK | done through helm charts / following documentation to deploy nginx ingress | |
| ||| |
| __1.2 Configure Software Updates__||| |
| 1.2.1 Ensure package manager repositories are properly configured (Not Scored) | OK | done via helm, nginx version could be overwritten, however compatibility is not ensured then| |
| 1.2.2 Ensure the latest software package is installed (Not Scored)| ACTION NEEDED | done via helm, nginx version could be overwritten, however compatibility is not ensured then| Plan for periodic updates |
| ||| |
| __2 Basic Configuration__ ||| |
| ||| |
| __2.1 Minimize NGINX Modules__||| |
| 2.1.1 Ensure only required modules are installed (Not Scored) | OK | Already only needed modules are installed, however proposals for further reduction are welcome | |
| 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) | OK | | |
| 2.1.3 Ensure modules with gzip functionality are disabled (Scored)| OK | | |
| 2.1.4 Ensure the autoindex module is disabled (Scored)| OK | No autoindex configs so far in ingress defaults| |
| ||| |
| __2.2 Account Security__||| |
| 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) | OK | Pod configured as user www-data: [See this line in helm chart values](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L10). Compiled with user www-data: [See this line in build script](https://github.com/kubernetes/ingress-nginx/blob/5d67794f4fbf38ec6575476de46201b068eabf87/images/nginx/rootfs/build.sh#L529) | |
| 2.2.2 Ensure the NGINX service account is locked (Scored) | OK | Docker design ensures this | |
| 2.2.3 Ensure the NGINX service account has an invalid shell (Scored)| OK | Shell is nologin: [see this line in build script](https://github.com/kubernetes/ingress-nginx/blob/5d67794f4fbf38ec6575476de46201b068eabf87/images/nginx/rootfs/build.sh#L613)| |
| ||| |
| __2.3 Permissions and Ownership__ ||| |
| 2.3.1 Ensure NGINX directories and files are owned by root (Scored) | OK | Obsolete through docker-design and ingress controller needs to update the configs dynamically| |
| 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) | OK | See previous answer| |
| 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored)| OK | No PID-File due to docker design | |
| 2.3.4 Ensure the core dump directory is secured (Not Scored)| OK | No working_directory configured by default | |
| ||| |
| __2.4 Network Configuration__ ||| |
| 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored)| OK | Ensured by automatic nginx.conf configuration| |
| 2.4.2 Ensure requests for unknown host names are rejected (Not Scored)| OK | They are not rejected but sent to the "default backend" delivering appropriate errors (mostly 404)| |
| 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored)| ACTION NEEDED| Default is 75s | configure keep-alive to 10 seconds [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#keep-alive) |
| 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored)| RISK TO BE ACCEPTED| Not configured, however the nginx default is 60s| Not configurable|
| ||| |
| __2.5 Information Disclosure__||| |
| 2.5.1 Ensure server_tokens directive is set to `off` (Scored) | OK | server_tokens is configured to off by default| |
| 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) | ACTION NEEDED| 404 shows no version at all, 503 and 403 show "nginx", which is hardcoded [see this line in nginx source code](https://github.com/nginx/nginx/blob/master/src/http/ngx_http_special_response.c#L36) | configure custom error pages at least for 403, 404, 500 and 503|
| 2.5.3 Ensure hidden file serving is disabled (Not Scored) | ACTION NEEDED | config not set | configure a config.server-snippet, but beware of .well-known challenges or similar. Refer to the benchmark for details |
| 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored)| ACTION NEEDED| hide not configured| configure hide-headers with array of "X-Powered-By" and "Server": [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#hide-headers) |
| ||| |
| __3 Logging__ ||| |
| ||| |
| 3.1 Ensure detailed logging is enabled (Not Scored) | OK | nginx ingress has a very detailed log format by default | |
| 3.2 Ensure access logging is enabled (Scored) | OK | Access log is enabled by default | |
| 3.3 Ensure error logging is enabled and set to the info logging level (Scored)| OK | Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway | |
| 3.4 Ensure log files are rotated (Scored) | OBSOLETE | Log file handling is not part of the nginx ingress and should be handled separately | |
| 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer| |
| 3.6 Ensure access logs are sent to a remote syslog server (Not Scored)| OBSOLETE | See previous answer| |
| 3.7 Ensure proxies pass source IP information (Scored)| OK | Headers are set by default | |
| ||| |
| __4 Encryption__ ||| |
| ||| |
| __4.1 TLS / SSL Configuration__ ||| |
| 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) | OK | Redirect to TLS is default | |
| 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored)| ACTION NEEDED| There are plenty of manuals on the web for installing certificates. A good way is to use Let's Encrypt through cert-manager | Install proper certificates or use Let's Encrypt with cert-manager |
| 4.1.3 Ensure private key permissions are restricted (Scored)| ACTION NEEDED| See previous answer| |
| 4.1.4 Ensure only modern TLS protocols are used (Scored)| OK/ACTION NEEDED | Default is TLS 1.2 + 1.3; while this is okay for the CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off clients on older operating systems | Set controller.config.ssl-protocols to "TLSv1.3"|
| 4.1.5 Disable weak ciphers (Scored) | ACTION NEEDED| Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers | Set controller.config.ssl-ciphers to "EECDH+AESGCM:EDH+AESGCM"|
| 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) | ACTION NEEDED| No custom DH parameters are generated| Generate dh parameters for each ingress deployment you use - [see here for a how to](https://kubernetes.github.io/ingress-nginx/examples/customization/ssl-dh-param/) |
| 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) | ACTION NEEDED | Not enabled | set via [this configuration parameter](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-ocsp) |
| 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored)| OK | HSTS is enabled by default | |
| 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored)| ACTION NEEDED / RISK TO BE ACCEPTED | HPKP not enabled by default | If Let's Encrypt is not used, set the correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If Let's Encrypt is used, this is complicated; a solution is not yet known |
| 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) | DEPENDS ON BACKEND | Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [manual is here](https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/)|
| 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) | DEPENDS ON BACKEND | Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [see configuration here](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#backend-certificate-authentication) |
| 4.1.12 Ensure your domain is preloaded (Not Scored) | ACTION NEEDED| Preload is not active by default | Set controller.config.hsts-preload to true|
| 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored)| OK | Session tickets are disabled by default | |
| 4.1.14 Ensure HTTP/2.0 is used (Not Scored) | OK | http2 is set by default| |
| ||| |
| __5 Request Filtering and Restrictions__||| |
| ||| |
| __5.1 Access Control__||| |
| 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored)| OK/ACTION NEEDED | Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it | If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) |
| 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) | OK/ACTION NEEDED | Depends on use case| If required it can be set via config snippet|
| ||| |
| __5.2 Request Limits__||| |
| 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) | ACTION NEEDED| Default timeout is 60s | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#client-header-timeout) and respective body equivalent|
| 5.2.2 Ensure the maximum request body size is set correctly (Scored)| ACTION NEEDED| Default is 1m| set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#proxy-body-size)|
| 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) | ACTION NEEDED| Default is `4 8k` (four buffers of 8k each)| Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#large-client-header-buffers)|
| 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) | OK/ACTION NEEDED| No limit set| Depends on use case, limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting)|
| 5.2.5 Ensure rate limits by IP address are set (Not Scored) | OK/ACTION NEEDED| No limit set| Depends on use case, limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting)|
| ||| |
| __5.3 Browser Security__||| |
| 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored)| ACTION NEEDED| Header not set by default| Several ways to implement this - with the helm charts it works via controller.add-headers |
| 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) | ACTION NEEDED| See previous answer| See previous answer |
| 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored)| ACTION NEEDED| See previous answer| See previous answer |
| 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) | ACTION NEEDED| See previous answer| See previous answer |
| 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored)| ACTION NEEDED | Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress | check backend webserver |
| ||| |
| __6 Mandatory Access Control__| n/a| too high level, depends on backends | |
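
Most of the "ACTION NEEDED" items above map to ConfigMap keys of the controller. The following Helm values file is only an illustrative sketch: the key names come from the ConfigMap documentation linked in the table, but the concrete values (timeouts, ciphers, headers) are examples you need to adapt to your own environment, and the `addHeaders` key may differ between chart versions (check `helm show values`).

```yaml
controller:
  config:
    keep-alive: "10"                        # 2.4.3: keepalive_timeout of 10 seconds or less
    hide-headers: "X-Powered-By,Server"     # 2.5.4: do not disclose upstream/server information
    ssl-protocols: "TLSv1.3"                # 4.1.4: only modern TLS protocols
    ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"  # 4.1.5: stronger cipher list
    # ssl-dh-param: "ingress-nginx/dhparam" # 4.1.6: namespace/secret containing custom DH parameters
    enable-ocsp: "true"                     # 4.1.7: OCSP stapling
    hsts-preload: "true"                    # 4.1.12: HSTS preload
    client-header-timeout: "10"             # 5.2.1: client header read timeout (seconds)
    client-body-timeout: "10"               # 5.2.1: client body read timeout (seconds)
    proxy-body-size: "1m"                   # 5.2.2: explicit maximum request body size
    large-client-header-buffers: "4 8k"     # 5.2.3: explicit URI/header buffer size
  addHeaders:                               # 5.3.x: extra response headers (example values)
    X-Frame-Options: "DENY"
    X-Content-Type-Options: "nosniff"
```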
<style type="text/css" rel="stylesheet">
@media only screen and (min-width: 768px) {
td:nth-child(1){
white-space:normal !important;
}
.md-typeset table:not([class]) td {
padding: .2rem .3rem;
}
}
</style>

# Installation Guide
There are multiple ways to install the Ingress-Nginx Controller:
- with [Helm](https://helm.sh), using the project repository chart;
- with `kubectl apply`, using YAML manifests;
- with specific addons (e.g. for [minikube](#minikube) or [MicroK8s](#microk8s)).
On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to
get started as fast as possible, you can check the [quick start](#quick-start) instructions. However, in many
environments, you can improve the performance or get better logs by enabling extra features. We recommend that you
check the [environment-specific instructions](#environment-specific-instructions) for details about optimizing the
ingress controller for your particular environment or cloud provider.
## Contents
<!-- Quick tip: run `grep '^##' index.md` to check that the table of contents is up-to-date. -->
- [Quick start](#quick-start)
- [Environment-specific instructions](#environment-specific-instructions)
- ... [Docker Desktop](#docker-desktop)
- ... [Rancher Desktop](#rancher-desktop)
- ... [minikube](#minikube)
- ... [MicroK8s](#microk8s)
- ... [AWS](#aws)
- ... [GCE - GKE](#gce-gke)
- ... [Azure](#azure)
- ... [Digital Ocean](#digital-ocean)
- ... [Scaleway](#scaleway)
- ... [Exoscale](#exoscale)
- ... [Oracle Cloud Infrastructure](#oracle-cloud-infrastructure)
- ... [OVHcloud](#ovhcloud)
- ... [Bare-metal](#bare-metal-clusters)
- [Miscellaneous](#miscellaneous)
<!-- TODO: We have subdirectories for kubernetes versions now because of a PR
https://github.com/kubernetes/ingress-nginx/pull/8162 . You can see this here
https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/cloud .
We need to add documentation here that is clear and unambiguous in guiding users to pick the deployment manifest
under a subdirectory, based on the K8S version being used. But until the explicit clear docs land here, users are
free to use those subdirectories and get the manifest(s) related to their K8S version. -->
## Quick start
**If you have Helm,** you can deploy the ingress controller with the following command:
```console
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
```
It will install the controller in the `ingress-nginx` namespace, creating that namespace if it doesn't already exist.
!!! info
This command is *idempotent*:
- if the ingress controller is not installed, it will install it,
- if the ingress controller is already installed, it will upgrade it.
**If you want a full list of values that you can set while installing with Helm,** then run:
```console
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
```
!!! attention "Helm install on AWS/GCP/Azure/Other providers"
The *ingress-nginx-controller helm-chart is a generic install out of the box*. The default set of helm values is **not** configured for installation on any infra provider. The annotations that are applicable to the cloud provider must be customized by the users.<br/>
See [AWS LB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/).<br/>
Below are examples of some recommended annotations (the healthcheck ones are required for target-type IP) for the service resource of `--type LoadBalancer` on AWS:
```yaml
annotations:
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=270
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "10254"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: 200-299
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-something1 sg-something2"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "somebucket"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "ingress-nginx"
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
```
**If you don't have Helm** or if you prefer to use a YAML manifest, you can run the following command instead:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```
!!! info
The YAML manifest in the command above was generated with `helm template`, so you will end up with almost the same
resources as if you had used Helm to install the controller.
!!! attention
If you are running an old version of Kubernetes (1.18 or earlier), please read [this paragraph](#running-on-kubernetes-versions-older-than-119) for specific instructions.
Because of API deprecations, the default manifest may not work on your cluster.
Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.
### Firewall configuration
To check which ports are used by your installation of ingress-nginx, look at the output of `kubectl -n ingress-nginx get pod -o yaml`. In general, you need:
- Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.
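
To see which ports the controller pods actually expose, a quick check (only a sketch; pod names and ports depend on your installation):

```console
kubectl -n ingress-nginx get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'
```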
### Pre-flight check
A few pods should start in the `ingress-nginx` namespace:
```console
kubectl get pods --namespace=ingress-nginx
```
After a while, they should all be running. The following command will wait for the ingress controller pod to be up,
running, and ready:
```console
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
```
### Local testing
Let's create a simple web server and the associated service:
```console
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
```
Then create an ingress resource. The following example uses a host that maps to `localhost`:
```console
kubectl create ingress demo-localhost --class=nginx \
--rule="demo.localdev.me/*=demo:80"
```
Now, forward a local port to the ingress controller:
```console
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
```
!!! info
A note on DNS & network connectivity.
This documentation assumes that you are aware of the DNS and network-routing aspects involved in using ingress.
The port-forwarding shown above is the easiest way to demo how ingress works. The `kubectl port-forward ...` command forwards port 8080 on the local machine's TCP/IP stack to port 80 of the service created by the ingress-nginx installation, so traffic sent to port 8080 on localhost reaches port 80 of the ingress controller's service.
Port-forwarding is not meant for production use. It is used here only to simulate an HTTP request that originates outside the cluster and reaches the service of the ingress-nginx controller, which is exposed to receive traffic from outside the cluster.
[This issue](https://github.com/kubernetes/ingress-nginx/issues/10014#issuecomment-1567791549) shows a typical DNS problem and its solution.
At this point, you can access your deployment using curl:
```console
curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080
```
You should see an HTML response containing text like **"It works!"**.
### Online testing
If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it will have allocated an
external IP address or FQDN to the ingress controller.
You can see that IP address or FQDN with the following command:
```console
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```
It will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your Kubernetes cluster wasn't
able to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).
Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress
resource. The following example assumes that you have set up a DNS record for `www.demo.io`:
```console
kubectl create ingress demo --class=nginx \
--rule="www.demo.io/*=demo:80"
```
Alternatively, the `--rule` argument of the above command can be written without the wildcard path:
```console
kubectl create ingress demo --class=nginx \
--rule www.demo.io/=demo:80
```
You should then be able to see the "It works!" page when you connect to <http://www.demo.io/>. Congratulations,
you are serving a public website hosted on a Kubernetes cluster! 🎉
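
If you also have a TLS certificate for the host stored in a Kubernetes secret (for example one issued via cert-manager), you can reference it when creating the ingress. A sketch, assuming a secret named `demo-tls` already exists:

```console
kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80,tls=demo-tls"
```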
## Environment-specific instructions
### Local development clusters
#### minikube
The ingress controller can be installed through minikube's addons system:
```console
minikube addons enable ingress
```
#### MicroK8s
The ingress controller can be installed through MicroK8s's addons system:
```console
microk8s enable ingress
```
Please check the MicroK8s [documentation page](https://microk8s.io/docs/addon-ingress) for details.
#### Docker Desktop
Kubernetes is available in Docker Desktop:
- Mac, from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018)
- Windows, from [version 18.06.0-ce](https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-18060-ce-win70-2018-07-25)
First, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a
single node called `docker-desktop`.
The ingress controller can be installed on Docker Desktop using the default [quick start](#quick-start) instructions.
On most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller
will be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that
doesn't work, you might have to fall back to the `kubectl port-forward` method described in the
[local testing section](#local-testing).
#### Rancher Desktop
Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.
Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.
Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default [quick start](#quick-start) instructions. Follow the instructions described in the [local testing section](#local-testing) to try a sample.
### Cloud deployments
If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the
`externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an
extra hop in some cases. If you're installing with Helm, this can be done by adding
`--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will
let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of
the upstream load balancer. This must be done both in the ingress controller
(with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration
to function correctly.
In the following sections, we provide YAML manifests that enable these options when possible, using the specific
options of various cloud providers.
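
For example, when installing with Helm, both options can also be set in a values file instead of `--set` flags. This is only a sketch; remember that the PROXY protocol must additionally be enabled on the cloud load balancer itself:

```yaml
controller:
  service:
    externalTrafficPolicy: Local   # preserve the client source IP and skip the extra hop
  config:
    use-proxy-protocol: "true"     # only if the upstream load balancer speaks PROXY protocol
```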
#### AWS
In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of `Type=LoadBalancer`.
!!! info
The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB.
AWS provides the documentation on how to use
[Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
with [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller).
##### Network Load Balancer (NLB)
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/deploy.yaml
```
##### TLS termination in AWS Load Balancer (NLB)
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.
This section explains how to do that on AWS using an NLB.
1. Download the [deploy.yaml](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml) template
```console
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
```
2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:
```
proxy-real-ip-cidr: XXX.XXX.XXX/XX
```
3. Change the AWS Certificate Manager (ACM) ID as well:
```
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
```
4. Deploy the manifest:
```console
kubectl apply -f deploy.yaml
```
##### NLB Idle Timeouts
The idle timeout value for NLB TCP flows is 350 seconds and
[cannot be modified](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).
For this reason, you need to ensure that the
[keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout)
value is configured to less than 350 seconds for connections to work as expected.
By default, NGINX `keepalive_timeout` is set to `75s`.
More information with regard to timeouts can be found in the
[official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).
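
If you do override `keep-alive` in the controller configuration, keep it below the NLB limit. A minimal sketch using Helm values (the value shown is only an example):

```yaml
controller:
  config:
    keep-alive: "300"   # must stay below the fixed NLB idle timeout of 350 seconds
```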
#### GCE-GKE
First, your user needs to have `cluster-admin` permissions on the cluster. This can be done with the following command:
```console
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
```
Then, the ingress controller can be installed like this:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```
!!! warning
For private clusters, you will need to either add a firewall rule that allows master nodes access to
port `8443/tcp` on worker nodes, or change the existing rule that allows access to port `80/tcp`, `443/tcp` and
`10254/tcp` to also allow access to port `8443/tcp`. More information can be found in the
[Official GCP Documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall).
See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)
on adding rules and the [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/79739) for more detail.
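
As a sketch (the rule name below is a placeholder; look up the actual master-to-node rule of your cluster first), the firewall rule can be adjusted with `gcloud` like this:

```console
# List the automatically created master-to-node firewall rules of the cluster
gcloud compute firewall-rules list --filter="name~gke-.*-master"

# Extend the rule so the API server can also reach the admission webhook on 8443/tcp
# (--allow replaces the whole list, so repeat the ports that are already allowed)
gcloud compute firewall-rules update gke-CLUSTER_NAME-HASH-master \
  --allow tcp:80,tcp:443,tcp:10254,tcp:8443
```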
The PROXY protocol is supported on GCE; check the [official documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol) on how to enable it.
#### Azure
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```
More information with regard to Azure annotations for ingress controller can be found in the [official AKS documentation](https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller).
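
For an internal (private) load balancer on AKS, as covered by the linked documentation, the required annotation can be passed through the Helm values. A sketch:

```yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```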
#### Digital Ocean
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/do/deploy.yaml
```
- By default, the service object of the ingress-nginx-controller for Digital Ocean only configures one annotation: `service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"`. While this makes the service functional, it has been reported that the Digital Ocean load balancer graphs show `no data` unless a few other annotations are also configured. Some of these annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are documented in [this issue](https://github.com/kubernetes/ingress-nginx/issues/8965). Please refer to the issue to add annotations, with values specific to your environment, to get the DO load balancer graphs populated with data.
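
As an illustration only (the values are environment-specific; see the linked issue for the full discussion), such annotations can be added through the Helm values like this:

```yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      # Example of an additional, environment-specific annotation discussed in the issue:
      service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.example.com"
```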
#### Scaleway
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/scw/deploy.yaml
```
Refer to the [dedicated tutorial](https://www.scaleway.com/en/docs/tutorials/proxy-protocol-v2-load-balancer/#configuring-proxy-protocol-for-ingress-nginx) in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.
#### Exoscale
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
```
The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager
[documentation](https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md).
#### Oracle Cloud Infrastructure
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```
A
[complete list of available annotations for Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md)
can be found in the [OCI Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) documentation.
#### OVHcloud
```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
```
You can find the complete tutorial in the [OVHcloud documentation](https://docs.ovh.com/gb/en/kubernetes/installing-nginx-ingress/).
### Bare metal clusters
This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes
was installed manually, using generic Linux distros (like CentOS, Ubuntu...).
For quick testing, you can use a
[NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/baremetal/deploy.yaml
```
For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range),
see [bare-metal considerations](./baremetal.md).
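
If you prefer Helm over the static manifest, the bare-metal equivalent is to expose the controller through a `NodePort` service. A sketch:

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort
```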
## Miscellaneous
### Checking ingress controller version
Run `/nginx-ingress-controller --version` within the pod, for instance with `kubectl exec`:
```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
```
### Scope
By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior,
use the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single
namespace. Although the use of this flag is not popular, one important fact to note is that the secret containing the default-ssl-certificate needs to also be present in the watched namespace(s).
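
With the Helm chart, limiting the controller to a single namespace could look like this (a sketch; as noted above, the default SSL certificate secret then has to live in the watched namespace too):

```yaml
controller:
  scope:
    enabled: true
    namespace: "my-namespace"   # only Ingress objects in this namespace are watched
```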
See also
[“How to easily install multiple instances of the Ingress NGINX controller in the same cluster”](https://kubernetes.github.io/ingress-nginx/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster)
for more details.
### Webhook network access
!!! warning
The controller uses an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
to validate Ingress definitions. Make sure that you don't have
[Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service.
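
If the `ingress-nginx` namespace is covered by a default-deny policy, one possible sketch of a policy that keeps the webhook reachable is shown below; the exact selectors, and whether API-server traffic is affected at all, depend on your CNI:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-admission-webhook
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8443   # container port backing the ingress-nginx-controller-admission service
```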
### Certificate generation
!!! attention
The first time the ingress controller starts, two [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) create the SSL Certificate used by the admission webhook.
This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.
You can wait until it is ready by running the following command:
```console
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
```
### Running on Kubernetes versions older than 1.19
Ingress resources evolved over time. They started with `apiVersion: extensions/v1beta1`,
then moved to `apiVersion: networking.k8s.io/v1beta1` and more recently to `apiVersion: networking.k8s.io/v1`.
Here is how these Ingress versions are supported in Kubernetes:
- before Kubernetes 1.19, only `v1beta1` Ingress resources are supported
- from Kubernetes 1.19 to 1.21, both `v1beta1` and `v1` Ingress resources are supported
- in Kubernetes 1.22 and above, only `v1` Ingress resources are supported
And here is how these Ingress versions are supported in Ingress-Nginx Controller:
- before version 1.0, only `v1beta1` Ingress resources are supported
- in version 1.0 and above, only `v1` Ingress resources are supported
As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX
Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X
of the Ingress-Nginx Controller (e.g. version 0.49).
The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if
you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding
`--version='<4'` to the `helm install` command).
before version 1 0 only v1beta1 Ingress resources are supported in version 1 0 and above only v1 Ingress resources are As a result if you re running Kubernetes 1 19 or later you should be able to use the latest version of the NGINX Ingress Controller but if you re using an old version of Kubernetes 1 18 or earlier you will have to use version 0 X of the Ingress Nginx Controller e g version 0 49 The Helm chart of the Ingress Nginx Controller switched to version 1 in version 4 of the chart In other words if you re running Kubernetes 1 19 or earlier you should use version 3 X of the chart this can be done by adding version 4 to the helm install command |
# Bare-metal considerations
In traditional *cloud* environments, where network load balancers are available on-demand, a single Kubernetes manifest
suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to
any application running inside the cluster. *Bare-metal* environments lack this commodity, requiring a slightly
different setup to offer the same kind of access to external consumers.


The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a
Kubernetes cluster running on bare-metal.
## A pure software solution: MetalLB
[MetalLB][metallb] provides a network load-balancer implementation for Kubernetes clusters that do not run on a
supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
This section demonstrates how to use the [Layer 2 configuration mode][metallb-l2] of MetalLB together with the NGINX
Ingress controller in a Kubernetes cluster that has **publicly accessible nodes**. In this mode, one node attracts all
the traffic for the `ingress-nginx` Service IP. See [Traffic policies][metallb-trafficpolicies] for more details.

!!! note
The description of other supported configuration modes is out of scope for this document.
!!! warning
MetalLB is currently in *beta*. Read about the [Project maturity][metallb-maturity] and make sure you inform
yourself by reading the official documentation thoroughly.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB
was deployed following the [Installation][metallb-install] instructions, and that the Ingress-Nginx Controller was installed
using the steps described in the [quickstart section of the installation guide][install-quickstart].
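If you prefer Helm for MetalLB itself, a minimal installation sketch could look like the following (repository URL and chart name as published by the MetalLB project; adjust the namespace to your environment):

```console
$ helm repo add metallb https://metallb.github.io/metallb
$ helm install metallb metallb/metallb --namespace metallb-system --create-namespace
```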
MetalLB requires a pool of IP addresses in order to be able to take ownership of the `ingress-nginx` Service. This pool
can be defined through `IPAddressPool` objects in the same namespace as the MetalLB controller. This pool of IPs **must** be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
!!! example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal
environments this value is <None\>)
```console
$ kubectl get node
NAME STATUS ROLES EXTERNAL-IP
host-1 Ready master 203.0.113.1
host-2 Ready node 203.0.113.2
host-3 Ready node 203.0.113.3
```
After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates
the *loadBalancer* IP field of the `ingress-nginx` Service accordingly.
```yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: default
namespace: metallb-system
spec:
addresses:
- 203.0.113.10-203.0.113.15
autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system
spec:
ipAddressPools:
- default
```
```console
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
default-http-backend ClusterIP 10.0.64.249 <none> 80/TCP
ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP
```
As soon as MetalLB sets the external IP address of the `ingress-nginx` LoadBalancer Service, the corresponding entries
are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on
the ports configured in the LoadBalancer Service:
```console
$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2
```
!!! tip
In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the `Local`
traffic policy. Traffic policies are described in more detail in [Traffic policies][metallb-trafficpolicies] as
well as in the next section.
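As a minimal sketch, assuming the Service is named `ingress-nginx` and lives in the `ingress-nginx` namespace as in the listings above, the traffic policy can be switched with a patch:

```console
$ kubectl -n ingress-nginx patch svc ingress-nginx \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```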
[metallb]: https://metallb.universe.tf/
[metallb-maturity]: https://metallb.universe.tf/concepts/maturity/
[metallb-l2]: https://metallb.universe.tf/concepts/layer2/
[metallb-install]: https://metallb.universe.tf/installation/
[metallb-trafficpolicies]: https://metallb.universe.tf/usage/#traffic-policies
## Over a NodePort Service
Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the
[installation guide][install-baremetal].
!!! info
A Service of type `NodePort` exposes, via the `kube-proxy` component, the **same unprivileged** port (default:
30000-32767) on every Kubernetes node, masters included. For more information, see [Services][nodeport-def].
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to
any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client
located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports
80 and 443. Instead, the external client must append the NodePort allocated to the `ingress-nginx` Service to HTTP
requests.

!!! example
Given the NodePort `30100` allocated to the `ingress-nginx` Service
```console
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP PORT(S)
default-http-backend ClusterIP 10.0.64.249 80/TCP
ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP
```
and a Kubernetes node with the public IP address `203.0.113.2` (the external IP is added as an example, in most
bare-metal environments this value is <None\>)
```console
$ kubectl get node
NAME STATUS ROLES EXTERNAL-IP
host-1 Ready master 203.0.113.1
host-2 Ready node 203.0.113.2
host-3 Ready node 203.0.113.3
```
a client would reach an Ingress with `host: myapp.example.com` at `http://myapp.example.com:30100`, where the
myapp.example.com subdomain resolves to the 203.0.113.2 IP address.
!!! danger "Impact on the host system"
While it may sound tempting to reconfigure the NodePort range using the `--service-node-port-range` API server flag
to include privileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues
including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant
`kube-proxy` privileges it may otherwise not require.
This practice is therefore **discouraged**. See the other approaches proposed in this page for alternatives.
This approach has a few other limitations one ought to be aware of:
* **Source IP address**
Services of type NodePort perform [source address translation][nodeport-nat] by default. This means the source IP of an
HTTP request is always **the IP address of the Kubernetes node that received the request** from the perspective of
NGINX.
The recommended way to preserve the source IP in a NodePort setup is to set the value of the `externalTrafficPolicy`
field of the `ingress-nginx` Service spec to `Local` ([example][preserve-ip]).
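For reference, the relevant fragment of such a Service could look roughly like this (a sketch only, not a complete manifest; names and namespace follow the examples on this page):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local  # preserve the client source IP
```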
!!! warning
This setting effectively **drops packets** sent to Kubernetes nodes which are not running any instance of the NGINX
Ingress controller. Consider [assigning NGINX Pods to specific nodes][pod-assign] in order to control which nodes
the Ingress-Nginx Controller should (or should not) be scheduled on.
!!! example
In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments
this value is <None\>)
```console
$ kubectl get node
NAME STATUS ROLES EXTERNAL-IP
host-1 Ready master 203.0.113.1
host-2 Ready node 203.0.113.2
host-3 Ready node 203.0.113.3
```
with an `ingress-nginx-controller` Deployment composed of 2 replicas
```console
$ kubectl -n ingress-nginx get pod -o wide
NAME READY STATUS IP NODE
default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2
ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3
ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2
```
Requests sent to `host-2` and `host-3` would be forwarded to NGINX and the original client's IP would be preserved,
while requests to `host-1` would get dropped because there is no NGINX replica running on that node.
* **Ingress status**
Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller **does not
update the status of Ingress objects it manages**.
```console
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS
test-ingress myapp.example.com 80
```
Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible
to force the status update of all managed Ingress objects by setting the `externalIPs` field of the `ingress-nginx`
Service.
!!! warning
There is more to setting `externalIPs` than just enabling the Ingress-Nginx Controller to update the status of
Ingress objects. Please read about this option in the [Services][external-ips] page of official Kubernetes
documentation as well as the section about [External IPs](#external-ips) in this document for more information.
!!! example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal
environments this value is <None\>)
```console
$ kubectl get node
NAME STATUS ROLES EXTERNAL-IP
host-1 Ready master 203.0.113.1
host-2 Ready node 203.0.113.2
host-3 Ready node 203.0.113.3
```
one could edit the `ingress-nginx` Service and add the following field to the object spec
```yaml
spec:
externalIPs:
- 203.0.113.1
- 203.0.113.2
- 203.0.113.3
```
which would in turn be reflected on Ingress objects as follows:
```console
$ kubectl get ingress -o wide
NAME HOSTS ADDRESS PORTS
test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80
```
* **Redirects**
As NGINX is **not aware of the port translation operated by the NodePort Service**, backend applications are responsible
for generating redirect URLs that take into account the URL used by external clients, including the NodePort.
!!! example
Redirects generated by NGINX, for instance HTTP to HTTPS or `domain` to `www.domain`, are generated without
NodePort:
```console
$ curl -D- http://myapp.example.com:30100
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.2
Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect
```
[install-baremetal]: ./index.md#bare-metal
[install-quickstart]: ./index.md#quick-start
[nodeport-def]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
[nodeport-nat]: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport
[pod-assign]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
[preserve-ip]: https://github.com/kubernetes/ingress-nginx/blob/nginx-0.19.0/deploy/provider/aws/service-nlb.yaml#L12-L14
## Via the host network
In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure
`ingress-nginx` Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of
this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network
interfaces, without the extra network translation imposed by NodePort Services.
!!! note
This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the `ingress-nginx`
Service exists in the target cluster, it is **recommended to delete it**.
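For instance, assuming the Service was created with the name and namespace used throughout this page:

```console
$ kubectl -n ingress-nginx delete svc ingress-nginx
```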
This can be achieved by enabling the `hostNetwork` option in the Pods' spec.
```yaml
template:
spec:
hostNetwork: true
```
!!! danger "Security considerations"
Enabling this option **exposes every system daemon to the Ingress-Nginx Controller** on any network interface,
including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.
!!! example
Consider this `ingress-nginx-controller` Deployment composed of 2 replicas: NGINX Pods inherit the IP address
of their host instead of an internal Pod IP.
```console
$ kubectl -n ingress-nginx get pod -o wide
NAME READY STATUS IP NODE
default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2
ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3
ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2
```
One major limitation of this deployment approach is that only **a single Ingress-Nginx Controller Pod** may be scheduled
on each cluster node, because binding the same port multiple times on the same network interface is technically
impossible. Pods that are unschedulable due to such a situation fail with the following event:
```console
$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>
...
Events:
Type Reason From Message
---- ------ ---- -------
Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.
```
One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a *DaemonSet* instead
of a traditional Deployment.
!!! info
A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to
[repel those Pods][taints]. For more information, see [DaemonSet][daemonset].
Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the
configuration of the corresponding manifest at the user's discretion.
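As an illustration only, a Helm-based installation could enable the relevant chart values roughly as follows (a sketch; value names as found in the ingress-nginx chart, so verify them against your chart version):

```console
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set controller.kind=DaemonSet \
    --set controller.hostNetwork=true \
    --set controller.dnsPolicy=ClusterFirstWithHostNet
```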

Like with NodePorts, this approach has a few quirks it is important to be aware of.
* **DNS resolution**
Pods configured with `hostNetwork: true` do not use the internal DNS resolver (i.e. *kube-dns* or *CoreDNS*), unless
their `dnsPolicy` spec field is set to [`ClusterFirstWithHostNet`][dnspolicy]. Consider using this setting if NGINX is
expected to resolve internal names for any reason.
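In a raw manifest this is a one-line addition next to `hostNetwork` (Pod template fragment only):

```yaml
template:
  spec:
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet
```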
* **Ingress status**
Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default
`--publish-service` flag used in standard cloud setups **does not apply** and the status of all Ingress objects remains
blank.
```console
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS
test-ingress myapp.example.com 80
```
Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the
[`--report-node-internal-ip-address`][cli-args] flag, which sets the status of all Ingress objects to the internal IP
address of all nodes running the Ingress-Nginx Controller.
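One way to do that, sketched below, is to append the flag to the controller container's arguments (surrounding fields abbreviated; the container name and argument layout may differ slightly in your manifest):

```yaml
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      - --report-node-internal-ip-address
      # ...keep the other existing flags...
```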
!!! example
Given an `ingress-nginx-controller` DaemonSet composed of 2 replicas
```console
$ kubectl -n ingress-nginx get pod -o wide
NAME READY STATUS IP NODE
default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2
ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3
ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2
```
the controller sets the status of all Ingress objects it manages to the following value:
```console
$ kubectl get ingress -o wide
NAME HOSTS ADDRESS PORTS
test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80
```
!!! note
Alternatively, it is possible to override the address written to Ingress objects using the
`--publish-status-address` flag. See [Command line arguments][cli-args].
[taints]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[daemonset]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
[dnspolicy]: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
[cli-args]: ../user-guide/cli-arguments.md
## Using a self-provisioned edge
Similarly to cloud environments, this deployment approach requires an edge network component providing a public
entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software
(e.g. _HAproxy_) and is usually managed outside of the Kubernetes landscape by operations teams.
Such deployment builds upon the NodePort Service described above in [Over a NodePort Service](#over-a-nodeport-service),
with one significant difference: external clients do not access cluster nodes directly, only the edge component does.
This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.
On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes
nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort
on the target nodes as shown in the diagram below:

## External IPs
!!! danger "Source IP address"
This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore **not
recommended** to use it despite its apparent simplicity.
The `externalIPs` Service option was previously mentioned in the [NodePort](#over-a-nodeport-service) section.
As per the [Services][external-ips] page of the official Kubernetes documentation, the `externalIPs` option causes
`kube-proxy` to route traffic sent to arbitrary IP addresses **and on the Service ports** to the endpoints of that
Service. These IP addresses **must belong to the target node**.
!!! example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal
environments this value is <None\>)
```console
$ kubectl get node
NAME STATUS ROLES EXTERNAL-IP
host-1 Ready master 203.0.113.1
host-2 Ready node 203.0.113.2
host-3 Ready node 203.0.113.3
```
and the following `ingress-nginx` NodePort Service
```console
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP PORT(S)
ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP
```
One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort
and the Service port:
```yaml
spec:
externalIPs:
- 203.0.113.2
- 203.0.113.3
```
```console
$ curl -D- http://myapp.example.com:30100
HTTP/1.1 200 OK
Server: nginx/1.15.2
$ curl -D- http://myapp.example.com
HTTP/1.1 200 OK
Server: nginx/1.15.2
```
We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.
[external-ips]: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
# Prerequisites
Many of the examples in this directory have common prerequisites.
## TLS certificates
Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA
key/cert pair with an arbitrarily chosen hostname, created as follows
```console
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
................+++
................+++
writing new private key to 'tls.key'
-----
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret "tls-secret" created
```
Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.
## Client Certificate Authentication
CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's
identity via a common CA.
We have a CA certificate, which we usually obtain from a Certificate Authority, and use it to sign
both our server certificate and client certificate. Then, every time we want to access our backend, we must
pass the client certificate.
These instructions are based on the following [blog](https://medium.com/@awkwardferny/configuring-certificate-based-mutual-authentication-with-kubernetes-ingress-nginx-20e7e38fdfca)
**Generate the CA Key and Certificate:**
```console
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '/CN=My Cert Authority'
```
**Generate the Server Key, and Certificate and Sign with the CA Certificate:**
```console
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com'
openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
```
**Generate the Client Key, and Certificate and Sign with the CA Certificate:**
```console
openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client'
openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
```
Once this is complete you can continue to follow the instructions [here](./auth/client-certs/README.md#creating-certificate-secrets)
## Test HTTP Service
All examples that require a test HTTP Service use the standard http-svc pod,
which you can deploy as follows
```console
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml
service "http-svc" created
replicationcontroller "http-svc" created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-p1t3t 1/1 Running 0 1d
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc 10.0.122.116 <pending> 80:30301/TCP 1d
```
You can test that the HTTP Service works by exposing it temporarily
```console
$ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}'
"http-svc" patched
$ kubectl get svc http-svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc 10.0.122.116 <pending> 80:30301/TCP 1d
$ kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Selector: app=http-svc
Type: LoadBalancer
IP: 10.0.122.116
LoadBalancer Ingress: 108.59.87.136
Port: http 80/TCP
NodePort: http 30301/TCP
Endpoints: 10.180.1.6:8080
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer
1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer
16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer
$ curl 108.59.87.136
CLIENT VALUES:
client_address=10.240.0.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://108.59.87.136:8080/
SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=108.59.87.136
user-agent=curl/7.46.0
BODY:
-no body in request-
$ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}'
"http-svc" patched
```
# Rewrite
This example demonstrates how to use `Rewrite` annotations.
## Prerequisites
You will need to make sure your Ingress targets exactly one Ingress
controller by specifying the [ingress.class annotation](../../user-guide/multiple-ingress.md),
and that you have an ingress controller [running](../../deploy/) in your cluster.
## Deployment
Rewriting can be controlled using the following annotations:
|Name|Description|Values|
| --- | --- | --- |
|nginx.ingress.kubernetes.io/rewrite-target|Target URI where the traffic must be redirected|string|
|nginx.ingress.kubernetes.io/ssl-redirect|Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate)|bool|
|nginx.ingress.kubernetes.io/force-ssl-redirect|Forces the redirection to HTTPS even if the Ingress is not TLS Enabled|bool|
|nginx.ingress.kubernetes.io/app-root|Defines the Application Root that the Controller must redirect to if it's in the `/` context|string|
|nginx.ingress.kubernetes.io/use-regex|Indicates if the paths defined on an Ingress use regular expressions|bool|
## Examples
### Rewrite Target
!!! attention
Starting in Version 0.22.0, ingress definitions using the annotation `nginx.ingress.kubernetes.io/rewrite-target` are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a [capture group](https://www.regular-expressions.info/refcapture.html).
!!! note
[Captured groups](https://www.regular-expressions.info/refcapture.html) are saved in numbered placeholders, chronologically, in the form `$1`, `$2` ... `$n`. These placeholders can be used as parameters in the `rewrite-target` annotation.
!!! note
Please see the [FAQ](../../faq.md#validation-of-path) for Validation Of __`path`__
Create an Ingress rule with a rewrite annotation:
```console
$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
ingressClassName: nginx
rules:
- host: rewrite.bar.com
http:
paths:
- path: /something(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: http-svc
port:
number: 80
' | kubectl create -f -
```
In this ingress definition, any characters captured by `(.*)` will be assigned to the placeholder `$2`, which is then used as a parameter in the `rewrite-target` annotation.
For example, the ingress definition above will result in the following rewrites:
- `rewrite.bar.com/something` rewrites to `rewrite.bar.com/`
- `rewrite.bar.com/something/` rewrites to `rewrite.bar.com/`
- `rewrite.bar.com/something/new` rewrites to `rewrite.bar.com/new`
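To check one of the rewrites above you can, for example, resolve the test host to your controller and request one of the paths (here `$INGRESS_CONTROLLER_IP` is a placeholder for your ingress controller's address, and the response body depends on your backend):

```console
$ curl -s --resolve rewrite.bar.com:80:$INGRESS_CONTROLLER_IP http://rewrite.bar.com/something/new
```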
### App Root
Create an Ingress rule with an app-root annotation:
```
$ echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/app-root: /app1
name: approot
namespace: default
spec:
ingressClassName: nginx
rules:
- host: approot.bar.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
" | kubectl create -f -
```
Check the rewrite is working
```
$ curl -I -k http://approot.bar.com/
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.11.10
Date: Mon, 13 Mar 2017 14:57:15 GMT
Content-Type: text/html
Content-Length: 162
Location: http://approot.bar.com/app1
Connection: keep-alive
```
# Canary
Ingress Nginx has the ability to handle canary routing by setting specific
annotations. The following is an example of how to configure a canary
deployment with weighted canary routing.
## Create your main deployment and service
This is the main deployment of your application with the service that will be
used to route to it.
```bash
echo "
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: production
labels:
app: production
spec:
replicas: 1
selector:
matchLabels:
app: production
template:
metadata:
labels:
app: production
spec:
containers:
- name: production
image: registry.k8s.io/ingress-nginx/e2e-test-echo:v1.0.1@sha256:1cec65aa768720290d05d65ab1c297ca46b39930e56bc9488259f9114fcd30e2
ports:
- containerPort: 80
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
---
# Service
apiVersion: v1
kind: Service
metadata:
name: production
labels:
app: production
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: production
" | kubectl apply -f -
```
## Create the canary deployment and service
This is the canary deployment that will take a weighted share of the requests
instead of the main deployment.
```bash
echo "
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: canary
labels:
app: canary
spec:
replicas: 1
selector:
matchLabels:
app: canary
template:
metadata:
labels:
app: canary
spec:
containers:
- name: canary
image: registry.k8s.io/ingress-nginx/e2e-test-echo:v1.0.1@sha256:1cec65aa768720290d05d65ab1c297ca46b39930e56bc9488259f9114fcd30e2
ports:
- containerPort: 80
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
---
# Service
apiVersion: v1
kind: Service
metadata:
name: canary
labels:
app: canary
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: canary
" | kubectl apply -f -
```
## Create Ingress Pointing To Your Main Deployment
Next you will need to expose your main deployment with an Ingress resource.
Note that there are no canary-specific annotations on this Ingress.
```bash
echo "
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: production
annotations:
spec:
ingressClassName: nginx
rules:
- host: echo.prod.mydomain.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: production
port:
number: 80
" | kubectl apply -f -
```
## Create Ingress Pointing To Your Canary Deployment
You will then create an Ingress that has the canary-specific configuration.
Please pay special attention to the following:
- The host name is identical to the main Ingress host name.
- The `nginx.ingress.kubernetes.io/canary: "true"` annotation is required and
  marks this Ingress as a canary (without it, the two Ingresses for the same
  host would clash).
- The `nginx.ingress.kubernetes.io/canary-weight: "50"` annotation dictates the
  weight of the routing; in this case there is a 50% chance a request will
  hit the canary deployment instead of the main deployment.
```bash
echo "
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: canary
annotations:
nginx.ingress.kubernetes.io/canary: \"true\"
nginx.ingress.kubernetes.io/canary-weight: \"50\"
spec:
ingressClassName: nginx
rules:
- host: echo.prod.mydomain.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: canary
port:
number: 80
" | kubectl apply -f -
```
## Testing your setup
You can use the following command to test your setup (replacing
`INGRESS_CONTROLLER_IP` with your ingress controller's IP address):
```bash
for i in $(seq 1 10); do curl -s --resolve echo.prod.mydomain.com:80:$INGRESS_CONTROLLER_IP echo.prod.mydomain.com | grep "Hostname"; done
```
You should see output similar to the following, showing that your canary setup
is working as expected:
```bash
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
Hostname: production-5c5f65d859-phqzc
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
```
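If you later want to shift more traffic to the canary, the weight can be changed in place. A minimal sketch using `kubectl annotate` (the Ingress name `canary` comes from this example, and `100` is just an illustrative value that routes all traffic to the canary):
```bash
# Raise the canary weight in place; --overwrite is needed because the annotation already exists
kubectl annotate ingress canary nginx.ingress.kubernetes.io/canary-weight="100" --overwrite
```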
# External authentication, authentication service response headers propagation
This example demonstrates propagation of selected authentication service response headers
to a backend service.
Sample configuration includes:
* Sample authentication service producing several response headers
  * Authentication logic is based on the HTTP header `User`: requests whose `User` header contains the string `internal` are considered authenticated
  * After successful authentication, the service generates the response headers `UserID` and `UserRole`
* Sample echo service displaying header information
* Two ingress objects pointing to echo service
* Public, which allows access from unauthenticated users
* Private, which allows access from authenticated users only
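The header propagation itself is configured on the private Ingress through the external-auth annotations. A minimal sketch of how such an Ingress can be wired up (the names mirror this example, but the manifests in `deploy/` are the authoritative configuration):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-demo-echo-service
  annotations:
    # delegate the authentication decision to the auth service
    nginx.ingress.kubernetes.io/auth-url: http://demo-auth-service.default.svc.cluster.local
    # copy these auth-service response headers onto the request sent to the backend
    nginx.ingress.kubernetes.io/auth-response-headers: UserID, UserRole
spec:
  ingressClassName: nginx
  rules:
  - host: secure-demo-echo-service.kube.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-echo-service
            port:
              number: 80
```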
You can deploy these example services and Ingresses as follows:
```console
$ kubectl create -f deploy/
deployment "demo-auth-service" created
service "demo-auth-service" created
ingress "demo-auth-service" created
deployment "demo-echo-service" created
service "demo-echo-service" created
ingress "public-demo-echo-service" created
ingress "secure-demo-echo-service" created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
demo-auth-service-2769076528-7g9mh 1/1 Running 0 30s
demo-echo-service-3636052215-3vw8c 1/1 Running 0 29s
$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
public-demo-echo-service public-demo-echo-service.kube.local 80 1m
secure-demo-echo-service secure-demo-echo-service.kube.local 80 1m
```
## Test 1: public service with no auth header
```console
$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: public-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:19:21 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 20
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: , UserRole:
```
## Test 2: secure service with no auth header
```console
$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: secure-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:18:48 GMT
< Content-Type: text/html
< Content-Length: 170
< Connection: keep-alive
<
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.11.10</center>
</body>
</html>
* Connection #0 to host 192.168.99.100 left intact
```
## Test 3: public service with valid auth header
```console
$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: public-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
> User:internal
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:19:59 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 44
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 1443635317331776148, UserRole: admin
```
## Test 4: secure service with valid auth header
```console
$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: secure-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
> User:internal
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:17:23 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 43
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 605394647632969758, UserRole: admin
```
# Static IPs
This example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.
## Prerequisites
You need a [TLS cert](../PREREQUISITES.md#tls-certificates) and a [test HTTP service](../PREREQUISITES.md#test-http-service) for this example.
You will also need to make sure your Ingress targets exactly one Ingress
controller by specifying the [ingress.class annotation](../../user-guide/multiple-ingress.md),
and that you have an ingress controller [running](../../deploy/) in your cluster.
## Acquiring an IP
Since instances of the ingress-nginx controller actually run on nodes in your cluster,
by default nginx Ingresses will only get static IPs if your cloud provider
supports static IP assignments to nodes. On GKE/GCE, for example, even though
nodes get static IPs, the IPs are not retained across upgrades.
To acquire a static IP for the ingress-nginx-controller, simply put it
behind a Service of `Type=LoadBalancer`.
First, create a loadbalancer Service and wait for it to acquire an IP:
```console
$ kubectl create -f static-ip-svc.yaml
service "ingress-nginx-lb" created
$ kubectl get svc ingress-nginx-lb
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m
```
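For reference, `static-ip-svc.yaml` boils down to a Service of `Type=LoadBalancer` in front of the controller pods; a minimal sketch (the selector here is an assumption, adjust it to the labels your controller pods actually carry):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    # assumption: match this to your ingress-nginx controller pod labels
    app.kubernetes.io/name: ingress-nginx
```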
Then, update the ingress controller so it adopts the static IP of the Service
by passing the `--publish-service` flag (the example yaml used in the next step
already has it set to "ingress-nginx-lb").
```console
$ kubectl create -f ingress-nginx-controller.yaml
deployment "ingress-nginx-controller" created
```
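The relevant part of `ingress-nginx-controller.yaml` is the controller argument itself; a sketch of the container args (assuming the Service created above lives in the controller's own namespace):
```yaml
args:
  - /nginx-ingress-controller
  # publish the LoadBalancer Service's address on the status of every Ingress
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx-lb
```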
## Assigning the IP to an Ingress
From here on every Ingress created with the `ingress.class` annotation set to
`nginx` will get the IP allocated in the previous step.
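A minimal sketch of what such an `ingress-nginx.yaml` could look like (the backend Service name is an assumption, and the file used in this example also configures TLS, which is why ports 80 and 443 appear in the output below):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc   # assumption: any HTTP backend Service
            port:
              number: 80
```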
```console
$ kubectl create -f ingress-nginx.yaml
ingress "ingress-nginx" created
$ kubectl get ing ingress-nginx
NAME HOSTS ADDRESS PORTS AGE
ingress-nginx * 104.154.109.191 80, 443 13m
$ curl 104.154.109.191 -kL
CLIENT VALUES:
client_address=10.180.1.25
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://104.154.109.191:8080/
...
```
## Retaining the IP
You can test retention by deleting the Ingress:
```console
$ kubectl delete ing ingress-nginx
ingress "ingress-nginx" deleted
$ kubectl create -f ingress-nginx.yaml
ingress "ingress-nginx" created
$ kubectl get ing ingress-nginx
NAME HOSTS ADDRESS PORTS AGE
ingress-nginx * 104.154.109.191 80, 443 13m
```
> Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all
> Ingresses, because all requests are proxied through the same set of nginx
> controllers.
## Promote ephemeral to static IP
To promote the allocated IP to static, you can update the Service manifest:
```console
$ kubectl patch svc ingress-nginx-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
"ingress-nginx-lb" patched
```
... and promote the IP to static (promotion works differently across cloud providers;
the example provided is for GKE/GCE):
```console
$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1
Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].
---
address: 104.154.109.191
creationTimestamp: '2017-01-31T16:34:50.089-08:00'
description: ''
id: '5208037144487826373'
kind: compute#address
name: ingress-nginx-lb
region: us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb
status: IN_USE
users:
- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000
```
Now even if the Service is deleted, the IP will persist, so you can recreate the
Service with `spec.loadBalancerIP` set to `104.154.109.191`.
# gRPC
This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.
## Prerequisites
1. You have a kubernetes cluster running.
2. You have a domain name such as `example.com` that is configured to route traffic to the Ingress-NGINX controller.
3. You have the ingress-nginx-controller installed as per docs.
4. You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use <https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go> as an example.
5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type `tls`, in the same namespace as the gRPC application.
### Step 1: Create a Kubernetes `Deployment` for gRPC app
- Make sure your gRPC application pod is running and listening for connections. For example, you can check with a kubectl command like the one below:
```console
$ kubectl get po -A -o wide | grep go-grpc-greeter-server
```
- If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.
- As an example gRPC application, we can use this app <https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go>.
- To create a container image for this app, you can use [this Dockerfile](https://github.com/kubernetes/ingress-nginx/blob/main/images/go-grpc-greeter-server/rootfs/Dockerfile).
- If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a Deployment resource that uses that image. If necessary, edit this manifest to suit your needs.
```
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: go-grpc-greeter-server
name: go-grpc-greeter-server
spec:
replicas: 1
selector:
matchLabels:
app: go-grpc-greeter-server
template:
metadata:
labels:
app: go-grpc-greeter-server
spec:
containers:
- image: <reponame>/go-grpc-greeter-server # Edit this for your reponame
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 50m
memory: 50Mi
name: go-grpc-greeter-server
ports:
- containerPort: 50051
EOF
```
### Step 2: Create the Kubernetes `Service` for the gRPC app
- You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: go-grpc-greeter-server
name: go-grpc-greeter-server
spec:
ports:
- port: 80
protocol: TCP
targetPort: 50051
selector:
app: go-grpc-greeter-server
type: ClusterIP
EOF
```
- You can save the above example manifest to a file named `service.go-grpc-greeter-server.yaml` and edit it to match your deployment/pod, if required. You can then create the Service resource with a kubectl command like this:
```
$ kubectl create -f service.go-grpc-greeter-server.yaml
```
### Step 3: Create the Kubernetes `Ingress` resource for the gRPC app
- Use the following example manifest of an Ingress resource to create an Ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL certificate existing in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret of type [`kubernetes.io/tls`](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets), because we are terminating TLS on the ingress.
```
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
name: fortune-ingress
namespace: default
spec:
ingressClassName: nginx
rules:
- host: grpctest.dev.mydomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: go-grpc-greeter-server
port:
number: 80
tls:
# This secret must exist beforehand
# The cert must also contain the subj-name grpctest.dev.mydomain.com
# https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md#tls-certificates
- secretName: wildcard.dev.mydomain.com
hosts:
- grpctest.dev.mydomain.com
EOF
```
- If you save the above example manifest as a file named `ingress.go-grpc-greeter-server.yaml` and edit it to match your deployment and service, you can create the ingress like this:
```
$ kubectl create -f ingress.go-grpc-greeter-server.yaml
```
- The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive "insecure").
- For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation `nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"`.
- A few more things to note:
- We've tagged the ingress with the annotation `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"`. This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
- We're terminating TLS at the ingress and have configured an SSL certificate `wildcard.dev.mydomain.com`. The ingress matches traffic arriving as `https://grpctest.dev.mydomain.com:443` and routes unencrypted messages to the backend Kubernetes service.
### Step 4: test the connection
- Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the [grpcurl](https://github.com/fullstorydev/grpcurl) utility:
```
$ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello
{
"message": "Hello "
}
```
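If you want to send an actual request payload instead of an empty message, `grpcurl` also accepts a JSON body via `-d`; for the greeter example above that could look like this (the `name` field follows the standard `helloworld` proto):
```
$ grpcurl -d '{"name": "world"}' grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello
{
  "message": "Hello world"
}
```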
### Debugging Hints
1. Obviously, watch the logs on your app.
2. Watch the logs for the ingress-nginx-controller (increasing verbosity as
needed).
3. Double-check your address and ports.
4. Set the `GODEBUG=http2debug=2` environment variable to get detailed http/2
logging on the client and/or server.
5. Study RFC 7540 (http/2) <https://tools.ietf.org/html/rfc7540>.
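One additional check is to inspect the NGINX configuration the controller generated and confirm the gRPC proxying directives are present; a sketch (the pod name and namespace are placeholders for your own controller pod):
```
$ kubectl exec -n ingress-nginx <controller-pod> -- cat /etc/nginx/nginx.conf | grep grpc_pass
```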
> If you are developing public gRPC endpoints, check out
> https://proto.stack.build, a protocol buffer / gRPC build service that you can
> use to help make it easier for your users to consume your API.
> See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
### Notes on using response/request streams
> `grpc_read_timeout` and `grpc_send_timeout` will be set as `proxy_read_timeout` and `proxy_send_timeout` when you set backend protocol to `GRPC` or `GRPCS`.
1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the `grpc_read_timeout` to accommodate this.
2. If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the
`grpc_send_timeout` and the `client_body_timeout`.
3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: `grpc_read_timeout`, `grpc_send_timeout` and `client_body_timeout`.
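Because of that mapping, the stream timeouts can be raised per Ingress with the proxy timeout annotations; a minimal sketch (the one-hour value is purely illustrative):
```
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # with backend-protocol GRPC/GRPCS these become grpc_read_timeout / grpc_send_timeout
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```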
# External Basic Authentication
### Example 1
Use an external service (Basic Auth) located at `https://httpbin.org`.
```
$ kubectl create -f ingress.yaml
ingress "external-auth" created
$ kubectl get ing external-auth
NAME HOSTS ADDRESS PORTS AGE
external-auth external-auth-01.sample.com 172.17.4.99 80 13s
$ kubectl get ing external-auth -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
creationTimestamp: 2016-10-03T13:50:35Z
generation: 1
name: external-auth
namespace: default
resourceVersion: "2068378"
selfLink: /apis/networking/v1/namespaces/default/ingresses/external-auth
uid: 5c388f1d-8970-11e6-9004-080027d2dc94
spec:
rules:
- host: external-auth-01.sample.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
status:
loadBalancer:
ingress:
- ip: 172.17.4.99
$
```
## Test 1: no username/password (expect code 401)
```console
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET / HTTP/1.1
> Host: external-auth-01.sample.com
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.11.3
< Date: Mon, 03 Oct 2016 14:52:08 GMT
< Content-Type: text/html
< Content-Length: 195
< Connection: keep-alive
< WWW-Authenticate: Basic realm="Fake Realm"
<
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.11.3</center>
</body>
</html>
* Connection #0 to host 172.17.4.99 left intact
```
## Test 2: valid username/password (expect code 200)
```
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
* Server auth using Basic with user 'user'
> GET / HTTP/1.1
> Host: external-auth-01.sample.com
> Authorization: Basic dXNlcjpwYXNzd2Q=
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.3
< Date: Mon, 03 Oct 2016 14:52:50 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
<
CLIENT VALUES:
client_address=10.2.60.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://external-auth-01.sample.com:8080/
SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001
HEADERS RECEIVED:
accept=*/*
authorization=Basic dXNlcjpwYXNzd2Q=
connection=close
host=external-auth-01.sample.com
user-agent=curl/7.50.1
x-forwarded-for=10.2.60.1
x-forwarded-host=external-auth-01.sample.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=10.2.60.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
-no body in request-
```
## Test 3: invalid username/password (expect code 401)
```
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
* Server auth using Basic with user 'user'
> GET / HTTP/1.1
> Host: external-auth-01.sample.com
> Authorization: Basic dXNlcjp1c2Vy
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.11.3
< Date: Mon, 03 Oct 2016 14:53:04 GMT
< Content-Type: text/html
< Content-Length: 195
< Connection: keep-alive
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="Fake Realm"
<
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.11.3</center>
</body>
</html>
* Connection #0 to host 172.17.4.99 left intact
```
# External OAUTH Authentication
### Overview
The `auth-url` and `auth-signin` annotations allow you to use an external
authentication provider to protect your Ingress resources.
!!! Important
This annotation requires `ingress-nginx-controller v0.9.0` or greater.
### Key Detail
This functionality is enabled by deploying multiple Ingress objects for a single host.
One Ingress object has no special annotations and handles authentication.
Other Ingress objects can then be annotated in such a way that requires the user to
authenticate against the first Ingress's endpoint, and can redirect `401`s to the
same endpoint.
Sample:
```yaml
...
metadata:
name: application
annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
...
```
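For reference, the Ingress that exposes the authentication endpoint itself carries no auth annotations; a minimal sketch of what it can look like (the `oauth2-proxy` Service name and port 4180 are assumptions based on a typical oauth2-proxy deployment; the manifest referenced in the example below contains the real definition):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
spec:
  ingressClassName: nginx
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
```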
### Example: OAuth2 Proxy + Kubernetes-Dashboard
This example will show you how to deploy [`oauth2_proxy`](https://github.com/pusher/oauth2_proxy)
into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
#### Prepare
1. Install the kubernetes dashboard
```console
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
```
2. Create a [custom GitHub OAuth application](https://github.com/settings/applications/new)

- Homepage URL is the FQDN in the Ingress rule, like `https://foo.bar.com`
- Authorization callback URL is the same as the base FQDN plus `/oauth2/callback`, like `https://foo.bar.com/oauth2/callback`

3. Configure values in the file [`oauth2-proxy.yaml`](https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/auth/oauth-external-auth/oauth2-proxy.yaml) with the values:
- OAUTH2_PROXY_CLIENT_ID with the github `<Client ID>`
- OAUTH2_PROXY_CLIENT_SECRET with the github `<Client Secret>`
- OAUTH2_PROXY_COOKIE_SECRET with value of `python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode("ascii"))'`
   - (optional, but recommended) OAUTH2_PROXY_GITHUB_USERS with the GitHub usernames allowed to log in
- `__INGRESS_HOST__` with a valid FQDN (e.g. `foo.bar.com`)
- `__INGRESS_SECRET__` with a Secret with a valid SSL certificate
4. Deploy the oauth2 proxy and the ingress rules by running:
```console
$ kubectl create -f oauth2-proxy.yaml
```
#### Test
Test the integration by accessing the configured URL, e.g. `https://foo.bar.com`



### Example: Vouch Proxy + Kubernetes-Dashboard
This example will show you how to deploy [`Vouch Proxy`](https://github.com/vouch/vouch-proxy)
into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
#### Prepare
1. Install the kubernetes dashboard
```console
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
```
2. Create a [custom GitHub OAuth application](https://github.com/settings/applications/new)

- Homepage URL is the FQDN in the Ingress rule, like `https://foo.bar.com`
- Authorization callback URL is the same as the base FQDN plus `/oauth2/auth`, like `https://foo.bar.com/oauth2/auth`

3. Configure Vouch Proxy values in the file [`vouch-proxy.yaml`](https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/auth/oauth-external-auth/vouch-proxy.yaml) with the values:
- VOUCH_COOKIE_DOMAIN with value of `<Ingress Host>`
- OAUTH_CLIENT_ID with the github `<Client ID>`
- OAUTH_CLIENT_SECRET with the github `<Client Secret>`
   - (optional, but recommended) VOUCH_WHITELIST with the GitHub usernames allowed to log in
- `__INGRESS_HOST__` with a valid FQDN (e.g. `foo.bar.com`)
- `__INGRESS_SECRET__` with a Secret with a valid SSL certificate
4. Deploy Vouch Proxy and the ingress rules by running:
```console
$ kubectl create -f vouch-proxy.yaml
```
#### Test
Test the integration by accessing the configured URL, e.g. `https://foo.bar.com`


 | ingress nginx | External OAUTH Authentication Overview The auth url and auth signin annotations allow you to use an external authentication provider to protect your Ingress resources Important This annotation requires ingress nginx controller v0 9 0 or greater Key Detail This functionality is enabled by deploying multiple Ingress objects for a single host One Ingress object has no special annotations and handles authentication Other Ingress objects can then be annotated in such a way that require the user to authenticate against the first Ingress s endpoint and can redirect 401 s to the same endpoint Sample yaml metadata name application annotations nginx ingress kubernetes io auth url https host oauth2 auth nginx ingress kubernetes io auth signin https host oauth2 start rd escaped request uri Example OAuth2 Proxy Kubernetes Dashboard This example will show you how to deploy oauth2 proxy https github com pusher oauth2 proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider Prepare 1 Install the kubernetes dashboard console kubectl create f https raw githubusercontent com kubernetes kops master addons kubernetes dashboard v1 10 1 yaml 2 Create a custom GitHub OAuth application https github com settings applications new Register OAuth2 Application images register oauth app png Homepage URL is the FQDN in the Ingress rule like https foo bar com Authorization callback URL is the same as the base FQDN plus oauth2 callback like https foo bar com oauth2 callback Register OAuth2 Application images register oauth app 2 png 3 Configure values in the file oauth2 proxy yaml https raw githubusercontent com kubernetes ingress nginx main docs examples auth oauth external auth oauth2 proxy yaml with the values OAUTH2 PROXY CLIENT ID with the github Client ID OAUTH2 PROXY CLIENT SECRET with the github Client Secret OAUTH2 PROXY COOKIE SECRET with value of python c import os base64 print base64 b64encode os urandom 16 decode ascii optional but recommended OAUTH2 PROXY GITHUB USERS with GitHub usernames to allow to login INGRESS HOST with a valid FQDN e g foo bar com INGRESS SECRET with a Secret with a valid SSL certificate 4 Deploy the oauth2 proxy and the ingress rules by running console kubectl create f oauth2 proxy yaml Test Test the integration by accessing the configured URL e g https foo bar com Register OAuth2 Application images github auth png GitHub authentication images oauth login png Kubernetes dashboard images dashboard png Example Vouch Proxy Kubernetes Dashboard This example will show you how to deploy Vouch Proxy https github com vouch vouch proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider Prepare 1 Install the kubernetes dashboard console kubectl create f https raw githubusercontent com kubernetes kops master addons kubernetes dashboard v1 10 1 yaml 2 Create a custom GitHub OAuth application https github com settings applications new Register OAuth2 Application images register oauth app png Homepage URL is the FQDN in the Ingress rule like https foo bar com Authorization callback URL is the same as the base FQDN plus oauth2 auth like https foo bar com oauth2 auth Register OAuth2 Application images register oauth app 2 png 3 Configure Vouch Proxy values in the file vouch proxy yaml https raw githubusercontent com kubernetes ingress nginx main docs examples auth oauth external auth vouch proxy yaml with the values VOUCH COOKIE DOMAIN with value of Ingress Host OAUTH 
CLIENT ID with the github Client ID OAUTH CLIENT SECRET with the github Client Secret optional but recommended VOUCH WHITELIST with GitHub usernames to allow to login INGRESS HOST with a valid FQDN e g foo bar com INGRESS SECRET with a Secret with a valid SSL certificate 4 Deploy Vouch Proxy and the ingress rules by running console kubectl create f vouch proxy yaml Test Test the integration by accessing the configured URL e g https foo bar com Register OAuth2 Application images github auth png GitHub authentication images oauth login png Kubernetes dashboard images dashboard png |
# Basic Authentication
This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with `htpasswd`.
It's important that the generated file is named `auth` (actually, that the secret has a key `data.auth`), otherwise the ingress controller returns a 503.
## Create htpasswd file
```console
$ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
```
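If you want to double-check the entry before turning the file into a Secret, recent versions of `htpasswd` (Apache httpd 2.4+) can verify a password against it, where `-b` takes the password from the command line; a small sketch:
```console
$ htpasswd -vb auth foo bar
```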
## Convert htpasswd into a secret
```console
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
```
## Examine secret
```console
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
name: basic-auth
namespace: default
type: Opaque
```
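If you want to confirm what ended up in the Secret, you can decode the `auth` key; a small sketch:
```console
$ kubectl get secret basic-auth -o jsonpath='{.data.auth}' | base64 --decode
foo:$apr1$OFG3Xybp$ckL0FHDAkoXYIlH9.cysT0
```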
## Using kubectl, create an ingress tied to the basic-auth secret
```console
$ echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-with-auth
annotations:
# type of authentication
nginx.ingress.kubernetes.io/auth-type: basic
# name of the secret that contains the user/password definitions
nginx.ingress.kubernetes.io/auth-secret: basic-auth
# message to display with an appropriate context why the authentication is required
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
ingressClassName: nginx
rules:
- host: foo.bar.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
" | kubectl create -f -
```
## Use curl to confirm authorization is required by the ingress
```
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'
* Trying 10.2.29.4...
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
> GET / HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.10.0
< Date: Wed, 11 May 2016 05:27:23 GMT
< Content-Type: text/html
< Content-Length: 195
< Connection: keep-alive
< WWW-Authenticate: Basic realm="Authentication Required - foo"
<
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.10.0</center>
</body>
</html>
* Connection #0 to host 10.2.29.4 left intact
```
## Use curl with the correct credentials to connect to the ingress
```
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'
* Trying 10.2.29.4...
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
* Server auth using Basic with user 'foo'
> GET / HTTP/1.1
> Host: foo.bar.com
> Authorization: Basic Zm9vOmJhcg==
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.0
< Date: Wed, 11 May 2016 06:05:26 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.29.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/
SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001
HEADERS RECEIVED:
accept=*/*
connection=close
host=foo.bar.com
user-agent=curl/7.43.0
x-request-id=e426c7829ef9f3b18d40730857c3eddb
x-forwarded-for=10.2.29.1
x-forwarded-host=foo.bar.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=10.2.29.1
x-scheme=http
BODY:
* Connection #0 to host 10.2.29.4 left intact
-no body in request-
``` | ingress nginx | Basic Authentication This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with htpasswd It s important the file generated is named auth actually that the secret has a key data auth otherwise the ingress controller returns a 503 Create htpasswd file console htpasswd c auth foo New password bar New password Re type new password Adding password for user foo Convert htpasswd into a secret console kubectl create secret generic basic auth from file auth secret basic auth created Examine secret console kubectl get secret basic auth o yaml apiVersion v1 data auth Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK kind Secret metadata name basic auth namespace default type Opaque Using kubectl create an ingress tied to the basic auth secret console echo apiVersion networking k8s io v1 kind Ingress metadata name ingress with auth annotations type of authentication nginx ingress kubernetes io auth type basic name of the secret that contains the user password definitions nginx ingress kubernetes io auth secret basic auth message to display with an appropriate context why the authentication is required nginx ingress kubernetes io auth realm Authentication Required foo spec ingressClassName nginx rules host foo bar com http paths path pathType Prefix backend service name http svc port number 80 kubectl create f Use curl to confirm authorization is required by the ingress curl v http 10 2 29 4 H Host foo bar com Trying 10 2 29 4 Connected to 10 2 29 4 10 2 29 4 port 80 0 GET HTTP 1 1 Host foo bar com User Agent curl 7 43 0 Accept HTTP 1 1 401 Unauthorized Server nginx 1 10 0 Date Wed 11 May 2016 05 27 23 GMT Content Type text html Content Length 195 Connection keep alive WWW Authenticate Basic realm Authentication Required foo html head title 401 Authorization Required title head body bgcolor white center h1 401 Authorization Required h1 center hr center nginx 1 10 0 center body html Connection 0 to host 10 2 29 4 left intact Use curl with the correct credentials to connect to the ingress curl v http 10 2 29 4 H Host foo bar com u foo bar Trying 10 2 29 4 Connected to 10 2 29 4 10 2 29 4 port 80 0 Server auth using Basic with user foo GET HTTP 1 1 Host foo bar com Authorization Basic Zm9vOmJhcg User Agent curl 7 43 0 Accept HTTP 1 1 200 OK Server nginx 1 10 0 Date Wed 11 May 2016 06 05 26 GMT Content Type text plain Transfer Encoding chunked Connection keep alive Vary Accept Encoding CLIENT VALUES client address 10 2 29 4 command GET real path query nil request version 1 1 request uri http foo bar com 8080 SERVER VALUES server version nginx 1 9 11 lua 10001 HEADERS RECEIVED accept connection close host foo bar com user agent curl 7 43 0 x request id e426c7829ef9f3b18d40730857c3eddb x forwarded for 10 2 29 1 x forwarded host foo bar com x forwarded port 80 x forwarded proto http x real ip 10 2 29 1 x scheme http BODY Connection 0 to host 10 2 29 4 left intact no body in request |
# TLS termination
This example demonstrates how to terminate TLS through the Ingress-Nginx Controller.
## Prerequisites
You need a [TLS cert](../PREREQUISITES.md#tls-certificates) and a [test HTTP service](../PREREQUISITES.md#test-http-service) for this example.
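If you do not already have a certificate, one way to create a self-signed cert and the `tls-secret` referenced below is sketched here (the CN matches the `foo.bar.com` host used in this example; for anything beyond testing, use a properly issued certificate):

```console
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com/O=foo.bar.com"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
```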
## Deployment
Create an `ingress.yaml` file.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-test
spec:
tls:
- hosts:
- foo.bar.com
# This assumes tls-secret exists and the SSL
# certificate contains a CN for foo.bar.com
secretName: tls-secret
ingressClassName: nginx
rules:
- host: foo.bar.com
http:
paths:
- path: /
pathType: Prefix
backend:
# This assumes http-svc exists and routes to healthy endpoints
service:
name: http-svc
port:
number: 80
```
The following command instructs the controller to terminate traffic using the provided
TLS cert, and forward unencrypted HTTP traffic to the test HTTP service.
```console
kubectl apply -f ingress.yaml
```
## Validation
You can confirm that the Ingress works.
```console
$ kubectl describe ing nginx-test
Name: nginx-test
Namespace: default
Address: 104.198.183.6
Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
*
http-svc:80 (<none>)
Annotations:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {ingress-nginx-controller } Normal CREATE default/nginx-test
7s 7s 1 {ingress-nginx-controller } Normal UPDATE default/nginx-test
7s 7s 1 {ingress-nginx-controller } Normal CREATE ip: 104.198.183.6
7s 7s 1 {ingress-nginx-controller } Warning MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming /
$ curl 104.198.183.6 -L
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
$ curl 104.198.183.6 -Lk
CLIENT VALUES:
client_address=10.240.0.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://35.186.221.137:8080/
SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001
HEADERS RECEIVED:
accept=*/*
connection=Keep-Alive
host=35.186.221.137
user-agent=curl/7.46.0
via=1.1 google
x-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106
x-forwarded-for=104.132.0.80, 35.186.221.137
x-forwarded-proto=https
BODY:
``` | ingress nginx | TLS termination This example demonstrates how to terminate TLS through the Ingress Nginx Controller Prerequisites You need a TLS cert PREREQUISITES md tls certificates and a test HTTP service PREREQUISITES md test http service for this example Deployment Create a ingress yaml file yaml apiVersion networking k8s io v1 kind Ingress metadata name nginx test spec tls hosts foo bar com This assumes tls secret exists and the SSL certificate contains a CN for foo bar com secretName tls secret ingressClassName nginx rules host foo bar com http paths path pathType Prefix backend This assumes http svc exists and routes to healthy endpoints service name http svc port number 80 The following command instructs the controller to terminate traffic using the provided TLS cert and forward un encrypted HTTP traffic to the test HTTP service console kubectl apply f ingress yaml Validation You can confirm that the Ingress works console kubectl describe ing nginx test Name nginx test Namespace default Address 104 198 183 6 Default backend default http backend 80 10 180 0 4 8080 10 240 0 2 8080 TLS tls secret terminates Rules Host Path Backends http svc 80 none Annotations Events FirstSeen LastSeen Count From SubObjectPath Type Reason Message 7s 7s 1 ingress nginx controller Normal CREATE default nginx test 7s 7s 1 ingress nginx controller Normal UPDATE default nginx test 7s 7s 1 ingress nginx controller Normal CREATE ip 104 198 183 6 7s 7s 1 ingress nginx controller Warning MAPPING Ingress rule default nginx test contains no path definition Assuming curl 104 198 183 6 L curl 60 SSL certificate problem self signed certificate More details here http curl haxx se docs sslcerts html curl 104 198 183 6 Lk CLIENT VALUES client address 10 240 0 4 command GET real path query nil request version 1 1 request uri http 35 186 221 137 8080 SERVER VALUES server version nginx 1 9 11 lua 10001 HEADERS RECEIVED accept connection Keep Alive host 35 186 221 137 user agent curl 7 46 0 via 1 1 google x cloud trace context f708ea7e369d4514fc90d51d7e27e91d 13322322294276298106 x forwarded for 104 132 0 80 35 186 221 137 x forwarded proto https BODY |
---
title: Availability zone aware routing
authors:
- "@ElvinEfendi"
reviewers:
- "@aledbf"
approvers:
- "@aledbf"
editor: TBD
creation-date: 2019-08-15
last-updated: 2019-08-16
status: implementable
---
# Availability zone aware routing
## Table of Contents
<!-- toc -->
- [Availability zone aware routing](#availability-zone-aware-routing)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [Implementation History](#implementation-history)
- [Drawbacks [optional]](#drawbacks-optional)
<!-- /toc -->
## Summary
Teach ingress-nginx about the availability zones endpoints are running in. This way, the ingress-nginx pod will do its best to proxy to a zone-local endpoint.
## Motivation
When users run their services across multiple availability zones, they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for inter-zone traffic.
When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in the same zone or a different one. That means it is at least as likely
to pick an endpoint from another zone and proxy the request to it. In this situation, the response from the endpoint to the ingress-nginx pod is considered
inter-zone traffic and usually costs extra money.
At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing.
According to [https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/](https://web.archive.org/web/20201008160149/https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/) Amazon also charges the same amount of money as GCP for cross-zone, egress traffic.
This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones, we can eliminate, or at least decrease, this cost.
Arguably, intra-zone network latency should also be better than cross-zone latency.
### Goals
* Given a regional cluster running ingress-nginx, ingress-nginx should make a best effort to pick a zone-local endpoint when proxying
* This should not impact canary feature
* ingress-nginx should be able to operate successfully if there are no zonal endpoints
### Non-Goals
* This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
* This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases
## Proposal
The idea here is to have the controller part of ingress-nginx
(1) detect what zone its current pod is running in and
(2) detect the zone for every endpoint it knows about.
After that, it will post that data as part of endpoints to Lua land.
When picking an endpoint, the Lua balancer will try to pick a zone-local endpoint first and,
if there is no zone-local endpoint, it will fall back to the current behavior.
Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.
**How does the controller know what zone it runs in?**
We can have the pod spec pass the node name using downward API as an environment variable.
Upon startup, the controller can get node details from the API based on the node name.
Once the node details are obtained
we can extract the zone from the `failure-domain.beta.kubernetes.io/zone` annotation.
Then we can pass that value to Lua land through the Nginx configuration
when loading the `lua_ingress.lua` module in the `init_by_lua` phase.
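A minimal sketch of the downward API wiring described above; the `fieldRef` mechanism is standard Kubernetes, while the `NODE_NAME` variable name is only an assumption for illustration:

```yaml
# Excerpt from the controller container spec (illustrative)
env:
  - name: NODE_NAME                # variable name chosen for illustration
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName   # downward API: the node this pod is scheduled on
```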
**How do we extract zones for endpoints?**
We can have the controller watch create and update events on nodes in the entire cluster and, based on that, keep a map of nodes to zones in memory.
When we generate the endpoints list, we can access the node name using `.subsets.addresses[i].nodeName`,
fetch the zone from the in-memory map based on that, and store it as a field on the endpoint.
__This solution assumes the `failure-domain.beta.kubernetes.io/zone` annotation does not change until the end of the node's life.__ Otherwise, we would have to
watch update events on the nodes as well, and that would add even more overhead.
Alternatively, we can fetch the list of nodes only when there is no entry in memory for the given node name. This is probably a better solution,
because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build the node-name-to-zone mapping on start.
From there on, it will sync during endpoint building in the main event loop if there is no existing entry for the node of an endpoint.
This means an extra API call in case the cluster has expanded.
**How do we make sure we do our best to choose a zone-local endpoint?**
This will be done on the Lua side. For every backend, we will initialize two balancer instances:
(1) with all endpoints
(2) with all endpoints corresponding to the current zone for the backend.
Then, once we choose which backend
needs to serve the request, we will first try to use the zonal balancer for that backend.
If a zonal balancer does not exist (i.e. there's no zonal endpoint)
then we will use a general balancer.
In case of zonal outages, we assume that the readiness probe will fail and the controller will
see no endpoints for the backend and therefore we will use a general balancer.
We can enable the feature using a ConfigMap setting. Doing it this way makes it easier to roll back in case of a problem.
## Implementation History
- initial version of KEP is shipped
- proposal and implementation details are done
## Drawbacks [optional]
More load on the Kubernetes API server. | ingress nginx | title Availability zone aware routing authors ElvinEfendi reviewers aledbf approvers aledbf editor TBD creation date 2019 08 15 last updated 2019 08 16 status implementable Availability zone aware routing Table of Contents toc Availability zone aware routing availability zone aware routing Table of Contents table of contents Summary summary Motivation motivation Goals goals Non Goals non goals Proposal proposal Implementation History implementation history Drawbacks optional drawbacks optional toc Summary Teach ingress nginx about availability zones where endpoints are running in This way ingress nginx pod will do its best to proxy to zone local endpoint Motivation When users run their services across multiple availability zones they usually pay for egress traffic between zones Providers such as GCP and Amazon EC2 usually charge extra for this feature ingress nginx when picking an endpoint to route request to does not consider whether the endpoint is in a different zone or the same one That means it s at least equally likely that it will pick an endpoint from another zone and proxy the request to it In this situation response from the endpoint to the ingress nginx pod is considered inter zone traffic and usually costs extra money At the time of this writing GCP charges 0 01 per GB of inter zone egress traffic according to https cloud google com compute network pricing According to https datapath io resources blog what are aws data transfer costs and how to minimize them https web archive org web 20201008160149 https datapath io resources blog what are aws data transfer costs and how to minimize them Amazon also charges the same amount of money as GCP for cross zone egress traffic This can be a lot of money depending on once s traffic By teaching ingress nginx about zones we can eliminate or at least decrease this cost Arguably inter zone network latency should also be better than cross zone Goals Given a regional cluster running ingress nginx ingress nginx should do best effort to pick a zone local endpoint when proxying This should not impact canary feature ingress nginx should be able to operate successfully if there are no zonal endpoints Non Goals This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress nginx pod s in that zone This feature will be relying on https kubernetes io docs reference kubernetes api labels annotations taints failure domainbetakubernetesiozone it is not this KEP s goal to support other cases Proposal The idea here is to have the controller part of ingress nginx 1 detect what zone its current pod is running in and 2 detect the zone for every endpoint it knows about After that it will post that data as part of endpoints to Lua land When picking an endpoint the Lua balancer will try to pick zone local endpoint first and if there is no zone local endpoint then it will fall back to current behavior Initially this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that How does controller know what zone it runs in We can have the pod spec pass the node name using downward API as an environment variable Upon startup the controller can get node details from the API based on the node name Once the node details are obtained we can extract the zone from the failure domain beta kubernetes io zone annotation Then we can pass that value to Lua land through Nginx 
configuration when loading lua ingress lua module in init by lua phase How do we extract zones for endpoints We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory And when we generate endpoints list we can access node name using subsets addresses i nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint This solution assumes failure domain beta kubernetes io zone annotation does not change until the end of the node s life Otherwise we have to watch update events as well on the nodes and that ll add even more overhead Alternatively we can get the list of nodes only when there s no node in the memory for the given node name This is probably a better solution because then we would avoid watching for API changes on node resources We can eagerly fetch all the nodes and build node name to zone mapping on start From there on it will sync during endpoint building in the main event loop if there s no existing entry for the node of an endpoint This means an extra API call in case cluster has expanded How do we make sure we do our best to choose zone local endpoint This will be done on the Lua side For every backend we will initialize two balancer instances 1 with all endpoints 2 with all endpoints corresponding to the current zone for the backend Then given the request once we choose what backend needs to serve the request we will first try to use a zonal balancer for that backend If a zonal balancer does not exist i e there s no zonal endpoint then we will use a general balancer In case of zonal outages we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer We can enable the feature using a configmap setting Doing it this way makes it easier to rollback in case of a problem Implementation History initial version of KEP is shipped proposal and implementation details are done Drawbacks optional More load on the Kubernetes API server |
# Proposal to split containers
* All the NGINX files should live on one container
* No file other than NGINX files should exist on this container
* This includes not mounting the service account
* All the controller files should live on a different container
* Controller container should have bare minimum to work (just go program)
* ServiceAccount should be mounted just on controller
* Inside the nginx container, there should be a really small HTTP listener, just able to start, stop and reload NGINX
## Roadmap (what needs to be done)
* Map what needs to be done to mount the SA just on controller container
* Map all the required files for NGINX to work
* Map all the required network calls between controller and NGINX
* eg.: Dynamic lua reconfiguration
* Map problematic features that will need attention
* SSLPassthrough today happens on controller process and needs to happen on NGINX
### Ports and endpoints on NGINX container
* Public HTTP/HTTPS ports - 80 and 443
* Lua configuration port - 10246 (HTTP) and 10247 (Stream)
* 3333 (temp) - Dataplane controller HTTP server (a hypothetical invocation is sketched after this list)
* /reload - (POST) Reloads the configuration.
* "config" argument is the location of temporary file that should be used / moved to nginx.conf
* /test - (POST) Test the configuration of a given file location
* "config" argument is the location of temporary file that should be tested
### Mounting empty SA on controller container
```yaml
kind: Pod
apiVersion: v1
metadata:
name: test
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
- name: othernginx
image: alpine:latest
command: ["/bin/sh"]
args: ["-c", "while true; do date; sleep 3; done"]
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: emptysecret
volumes:
- name: emptysecret
emptyDir:
sizeLimit: 1Mi
```
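Assuming the pod above is running, you can check that no service account token is visible inside the container where the empty volume is mounted (the emptyDir shadows the projected service account files, so the listing should come back empty):

```console
$ kubectl exec test -c othernginx -- ls -A /var/run/secrets/kubernetes.io/serviceaccount
```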
### Mapped folders on NGINX configuration
**WARNING** We need to be aware of mounts shared between containers and of inode problems. If we
mount a file instead of a directory, it may take time for the new file contents to be reflected in
the target container.
* "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;"; - Lua scripts
* "/var/log/nginx" - NGINX logs
* "/tmp/nginx (nginx.pid)" - NGINX pid directory / file, fcgi socket, etc
* " /etc/nginx/geoip" - GeoIP database directory - OK - /etc/ingress-controller/geoip
* /etc/nginx/mime.types - Mime types
* /etc/ingress-controller/ssl - SSL directory (fake cert, auth cert)
* /etc/ingress-controller/auth - Authentication files
* /etc/nginx/modsecurity - Modsecurity configuration
* /etc/nginx/owasp-modsecurity-crs - Modsecurity rules
* /etc/nginx/tickets.key - SSL tickets - OK - /etc/ingress-controller/tickets.key
* /etc/nginx/opentelemetry.toml - OTEL config - OK - /etc/ingress-controller/telemetry
* /etc/nginx/opentracing.json - Opentracing config - OK - /etc/ingress-controller/telemetry
* /etc/nginx/modules - NGINX modules
* /etc/nginx/fastcgi_params (maybe) - fcgi params
* /etc/nginx/template - Template, may be used by controller only
##### List of modules
```
ngx_http_auth_digest_module.so ngx_http_modsecurity_module.so
ngx_http_brotli_filter_module.so ngx_http_opentracing_module.so
ngx_http_brotli_static_module.so ngx_stream_geoip2_module.so
ngx_http_geoip2_module.so
```
##### List of files that may be removed
```
-rw-r--r-- 1 www-data www-data 1077 Jun 23 19:44 fastcgi.conf
-rw-r--r-- 1 www-data www-data 1077 Jun 23 19:44 fastcgi.conf.default
-rw-r--r-- 1 www-data www-data 1007 Jun 23 19:44 fastcgi_params
-rw-r--r-- 1 www-data www-data 1007 Jun 23 19:44 fastcgi_params.default
drwxr-xr-x 2 www-data www-data 4096 Jun 23 19:34 geoip
-rw-r--r-- 1 www-data www-data 2837 Jun 23 19:44 koi-utf
-rw-r--r-- 1 www-data www-data 2223 Jun 23 19:44 koi-win
drwxr-xr-x 6 www-data www-data 4096 Sep 19 14:13 lua
-rw-r--r-- 1 www-data www-data 5349 Jun 23 19:44 mime.types
-rw-r--r-- 1 www-data www-data 5349 Jun 23 19:44 mime.types.default
drwxr-xr-x 2 www-data www-data 4096 Jun 23 19:44 modsecurity
drwxr-xr-x 2 www-data www-data 4096 Jun 23 19:44 modules
-rw-r--r-- 1 www-data www-data 18275 Oct 1 21:28 nginx.conf
-rw-r--r-- 1 www-data www-data 2656 Jun 23 19:44 nginx.conf.default
-rwx------ 1 www-data www-data 420 Oct 1 21:28 opentelemetry.toml
-rw-r--r-- 1 www-data www-data 2 Oct 1 21:28 opentracing.json
drwxr-xr-x 7 www-data www-data 4096 Jun 23 19:44 owasp-modsecurity-crs
-rw-r--r-- 1 www-data www-data 636 Jun 23 19:44 scgi_params
-rw-r--r-- 1 www-data www-data 636 Jun 23 19:44 scgi_params.default
drwxr-xr-x 2 www-data www-data 4096 Sep 19 14:13 template
-rw-r--r-- 1 www-data www-data 664 Jun 23 19:44 uwsgi_params
-rw-r--r-- 1 www-data www-data 664 Jun 23 19:44 uwsgi_params.default
-rw-r--r-- 1 www-data www-data 3610 Jun 23 19:44 win-utf
``` | ingress nginx | Proposal to split containers All the NGINX files should live on one container No file other than NGINX files should exist on this container This includes not mounting the service account All the controller files should live on a different container Controller container should have bare minimum to work just go program ServiceAccount should be mounted just on controller Inside nginx container there should be a really small http listener just able to start stop and reload NGINX Roadmap what needs to be done Map what needs to be done to mount the SA just on controller container Map all the required files for NGINX to work Map all the required network calls between controller and NGINX eg Dynamic lua reconfiguration Map problematic features that will need attention SSLPassthrough today happens on controller process and needs to happen on NGINX Ports and endpoints on NGINX container Public HTTP HTTPs port 80 and 443 Lua configuration port 10246 HTTP and 10247 Stream 3333 temp Dataplane controller http server reload POST Reloads the configuration config argument is the location of temporary file that should be used moved to nginx conf test POST Test the configuration of a given file location config argument is the location of temporary file that should be tested Mounting empty SA on controller container yaml kind Pod apiVersion v1 metadata name test spec containers name nginx image nginx latest ports containerPort 80 name othernginx image alpine latest command bin sh args c while true do date sleep 3 done volumeMounts mountPath var run secrets kubernetes io serviceaccount name emptysecret volumes name emptysecret emptyDir sizeLimit 1Mi Mapped folders on NGINX configuration WARNING We need to be aware of inter mount containers and inode problems If we mount a file instead of a directory it may take time to reflect the file value on the target container etc nginx lua lua etc nginx lua vendor lua Lua scripts var log nginx NGINX logs tmp nginx nginx pid NGINX pid directory file fcgi socket etc etc nginx geoip GeoIP database directory OK etc ingress controller geoip etc nginx mime types Mime types etc ingress controller ssl SSL directory fake cert auth cert etc ingress controller auth Authentication files etc nginx modsecurity Modsecurity configuration etc nginx owasp modsecurity crs Modsecurity rules etc nginx tickets key SSL tickets OK etc ingress controller tickets key etc nginx opentelemetry toml OTEL config OK etc ingress controller telemetry etc nginx opentracing json Opentracing config OK etc ingress controller telemetry etc nginx modules NGINX modules etc nginx fastcgi params maybe fcgi params etc nginx template Template may be used by controller only List of modules ngx http auth digest module so ngx http modsecurity module so ngx http brotli filter module so ngx http opentracing module so ngx http brotli static module so ngx stream geoip2 module so ngx http geoip2 module so List of files that may be removed rw r r 1 www data www data 1077 Jun 23 19 44 fastcgi conf rw r r 1 www data www data 1077 Jun 23 19 44 fastcgi conf default rw r r 1 www data www data 1007 Jun 23 19 44 fastcgi params rw r r 1 www data www data 1007 Jun 23 19 44 fastcgi params default drwxr xr x 2 www data www data 4096 Jun 23 19 34 geoip rw r r 1 www data www data 2837 Jun 23 19 44 koi utf rw r r 1 www data www data 2223 Jun 23 19 44 koi win drwxr xr x 6 www data www data 4096 Sep 19 14 13 lua rw r r 1 www data www data 5349 Jun 23 19 44 mime types rw r r 1 www data www data 5349 Jun 23 19 44 mime 
types default drwxr xr x 2 www data www data 4096 Jun 23 19 44 modsecurity drwxr xr x 2 www data www data 4096 Jun 23 19 44 modules rw r r 1 www data www data 18275 Oct 1 21 28 nginx conf rw r r 1 www data www data 2656 Jun 23 19 44 nginx conf default rwx 1 www data www data 420 Oct 1 21 28 opentelemetry toml rw r r 1 www data www data 2 Oct 1 21 28 opentracing json drwxr xr x 7 www data www data 4096 Jun 23 19 44 owasp modsecurity crs rw r r 1 www data www data 636 Jun 23 19 44 scgi params rw r r 1 www data www data 636 Jun 23 19 44 scgi params default drwxr xr x 2 www data www data 4096 Sep 19 14 13 template rw r r 1 www data www data 664 Jun 23 19 44 uwsgi params rw r r 1 www data www data 664 Jun 23 19 44 uwsgi params default rw r r 1 www data www data 3610 Jun 23 19 44 win utf |
---
title: KEP Template
authors:
- "@janedoe"
reviewers:
- TBD
- "@alicedoe"
approvers:
- TBD
- "@oscardoe"
editor: TBD
creation-date: yyyy-mm-dd
last-updated: yyyy-mm-dd
status: provisional|implementable|implemented|deferred|rejected|withdrawn|replaced
see-also:
- "/docs/enhancements/20190101-we-heard-you-like-keps.md"
- "/docs/enhancements/20190102-everyone-gets-a-kep.md"
replaces:
- "/docs/enhancements/20181231-replaced-kep.md"
superseded-by:
- "/docs/enhancements/20190104-superseding-kep.md"
---
# Title
This is the title of the KEP.
Keep it simple and descriptive.
A good title can help communicate what the KEP is and should be considered as part of any review.
The title should be lowercased and spaces/punctuation should be replaced with `-`.
To get started with this template:
1. **Make a copy of this template.**
Create a copy of this template and name it `YYYYMMDD-my-title.md`, where `YYYYMMDD` is the date the KEP was first drafted.
1. **Fill out the "overview" sections.**
This includes the Summary and Motivation sections.
These should be easy if you've preflighted the idea of the KEP in an issue.
1. **Create a PR.**
Assign it to folks that are sponsoring this process.
1. **Create an issue**
When filing an enhancement tracking issue, please make sure to complete all fields in the template.
1. **Merge early.**
Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly.
The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow on PRs.
View anything marked as a `provisional` as a working document and subject to change.
Aim for single topic PRs to keep discussions focused.
If you disagree with what is already in a document, open a new PR with suggested changes.
The canonical place for the latest set of instructions (and the likely source of this file) is [here](YYYYMMDD-kep-template.md).
The `Metadata` section above is intended to support the creation of tooling around the KEP process.
This will be a YAML section that is fenced as a code block.
See the KEP process for details on each of these items.
## Table of Contents
A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.
Ensure the TOC is wrapped with <code>&lt;!-- toc --&gt;&lt;!-- /toc --&gt;</code> tags, and then generate with `hack/update-toc.sh`.
<!-- toc -->
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories [optional]](#user-stories-optional)
- [Story 1](#story-1)
- [Story 2](#story-2)
- [Implementation Details/Notes/Constraints [optional]](#implementation-detailsnotesconstraints-optional)
- [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
- [Test Plan](#test-plan)
- [Removing a deprecated flag](#removing-a-deprecated-flag)
- [Implementation History](#implementation-history)
- [Drawbacks [optional]](#drawbacks-optional)
- [Alternatives [optional]](#alternatives-optional)
<!-- /toc -->
## Summary
The `Summary` section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap.
It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.
A good summary is probably at least a paragraph in length.
## Motivation
This section is for explicitly listing the motivation, goals and non-goals of this KEP.
Describe why the change is important and the benefits to users.
The motivation section can optionally provide links to [experience reports][] to demonstrate the interest in a KEP within the wider Kubernetes community.
[experience reports]: https://github.com/golang/go/wiki/ExperienceReports
### Goals
List the specific goals of the KEP.
How will we know that this has succeeded?
### Non-Goals
What is out of scope for this KEP?
Listing non-goals helps to focus discussion and make progress.
## Proposal
This is where we get down to the nitty gritty of what the proposal actually is.
### User Stories [optional]
Detail the things that people will be able to do if this KEP is implemented.
Include as much detail as possible so that people can understand the "how" of the system.
The goal here is to make this feel real for users without getting bogged down.
#### Story 1
#### Story 2
### Implementation Details/Notes/Constraints [optional]
What are the caveats to the implementation?
What are some important details that didn't come across above?
Go into as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
### Risks and Mitigations
What are the risks of this proposal and how do we mitigate them?
Think broadly.
For example, consider both security and how this will impact the larger kubernetes ecosystem.
How will security be reviewed and by whom?
How will UX be reviewed and by whom?
Consider including folks who also work outside the project.
## Design Details
### Test Plan
**Note:** *Section not required until targeted at a release.*
Consider the following in developing a test plan for this enhancement:
- Will there be e2e and integration tests, in addition to unit tests?
- How will it be tested in isolation vs with other components?
No need to outline all of the test cases, just the general strategy.
Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.
All code is expected to have adequate tests (eventually with coverage expectations).
Please adhere to the [Kubernetes testing guidelines][testing-guidelines] when drafting this test plan.
[testing-guidelines]: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
#### Removing a deprecated flag
- Announce deprecation and support policy of the existing flag
- Two versions passed since introducing the functionality which deprecates the flag (to address version skew)
- Address feedback on usage/changed behavior, provided on GitHub issues
- Deprecate the flag
## Implementation History
Major milestones in the life cycle of a KEP should be tracked in `Implementation History`.
Major milestones might include
- the `Summary` and `Motivation` sections being merged signaling acceptance
- the `Proposal` section being merged signaling agreement on a proposed design
- the date implementation started
- the first Kubernetes release where an initial version of the KEP was available
- the version of Kubernetes where the KEP graduated to general availability
- when the KEP was retired or superseded
## Drawbacks [optional]
Why should this KEP _not_ be implemented?
## Alternatives [optional]
Similar to the `Drawbacks` section the `Alternatives` section is used to highlight and record other possible approaches to delivering the value proposed by a KEP. | ingress nginx | title KEP Template authors janedoe reviewers TBD alicedoe approvers TBD oscardoe editor TBD creation date yyyy mm dd last updated yyyy mm dd status provisional implementable implemented deferred rejected withdrawn replaced see also docs enhancements 20190101 we heard you like keps md docs enhancements 20190102 everyone gets a kep md replaces docs enhancements 20181231 replaced kep md superseded by docs enhancements 20190104 superseding kep md Title This is the title of the KEP Keep it simple and descriptive A good title can help communicate what the KEP is and should be considered as part of any review The title should be lowercased and spaces punctuation should be replaced with To get started with this template 1 Make a copy of this template Create a copy of this template and name it YYYYMMDD my title md where YYYYMMDD is the date the KEP was first drafted 1 Fill out the overview sections This includes the Summary and Motivation sections These should be easy if you ve preflighted the idea of the KEP in an issue 1 Create a PR Assign it to folks that are sponsoring this process 1 Create an issue When filing an enhancement tracking issue please ensure to complete all fields in the template 1 Merge early Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly The best way to do this is to just start with the Overview sections and fill out details incrementally in follow on PRs View anything marked as a provisional as a working document and subject to change Aim for single topic PRs to keep discussions focused If you disagree with what is already in a document open a new PR with suggested changes The canonical place for the latest set of instructions and the likely source of this file is here YYYYMMDD kep template md The Metadata section above is intended to support the creation of tooling around the KEP process This will be a YAML section that is fenced as a code block See the KEP process for details on each of these items Table of Contents A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template Ensure the TOC is wrapped with code lt toc rt lt toc rt code tags and then generate with hack update toc sh toc Summary summary Motivation motivation Goals goals Non Goals non goals Proposal proposal User Stories optional user stories optional Story 1 story 1 Story 2 story 2 Implementation Details Notes Constraints optional implementation detailsnotesconstraints optional Risks and Mitigations risks and mitigations Design Details design details Test Plan test plan Removing a deprecated flag removing a deprecated flag Implementation History implementation history Drawbacks optional drawbacks optional Alternatives optional alternatives optional toc Summary The Summary section is incredibly important for producing high quality user focused documentation such as release notes or a development roadmap It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself A good summary is probably at least a paragraph in length Motivation This section is for explicitly listing the motivation goals and non goals of this KEP Describe why the 
change is important and the benefits to users The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community experience reports https github com golang go wiki ExperienceReports Goals List the specific goals of the KEP How will we know that this has succeeded Non Goals What is out of scope for this KEP Listing non goals helps to focus discussion and make progress Proposal This is where we get down to the nitty gritty of what the proposal actually is User Stories optional Detail the things that people will be able to do if this KEP is implemented Include as much detail as possible so that people can understand the how of the system The goal here is to make this feel real for users without getting bogged down Story 1 Story 2 Implementation Details Notes Constraints optional What are the caveats to the implementation What are some important details that didn t come across above Go in to as much detail as necessary here This might be a good place to talk about core concepts and how they relate Risks and Mitigations What are the risks of this proposal and how do we mitigate Think broadly For example consider both security and how this will impact the larger kubernetes ecosystem How will security be reviewed and by whom How will UX be reviewed and by whom Consider including folks that also work outside project Design Details Test Plan Note Section not required until targeted at a release Consider the following in developing a test plan for this enhancement Will there be e2e and integration tests in addition to unit tests How will it be tested in isolation vs with other components No need to outline all of the test cases just the general strategy Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out All code is expected to have adequate tests eventually with coverage expectations Please adhere to the Kubernetes testing guidelines testing guidelines when drafting this test plan testing guidelines https git k8s io community contributors devel sig testing testing md Removing a deprecated flag Announce deprecation and support policy of the existing flag Two versions passed since introducing the functionality which deprecates the flag to address version skew Address feedback on usage changed behavior provided on GitHub issues Deprecate the flag Implementation History Major milestones in the life cycle of a KEP should be tracked in Implementation History Major milestones might include the Summary and Motivation sections being merged signaling acceptance the Proposal section being merged signaling agreement on a proposed design the date implementation started the first Kubernetes release where an initial version of the KEP was available the version of Kubernetes where the KEP graduated to general availability when the KEP was retired or superseded Drawbacks optional Why should this KEP not be implemented Alternatives optional Similar to the Drawbacks section the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP |
# Ingress NGINX - Code Overview
This document provides an overview of Ingress NGINX code.
## Core Golang code
This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses [Ingress Objects](https://kubernetes.io/docs/concepts/services-networking/ingress/)
and [annotations](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-annotation), watches Endpoints, and turns them into a usable nginx.conf configuration.
### Core Sync Logics:
Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that:
1. One copy is the currently running configuration model
2. Second copy is the one generated in response to some changes in the cluster
The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one.
There are static and dynamic configuration changes.
All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua.
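As a concrete illustration of the dynamic path, the controller effectively POSTs the new list of backends/endpoints to a local, Lua-handled configuration endpoint instead of reloading NGINX. The sketch below is illustrative only: the port is the Lua configuration port mentioned elsewhere in this repository's docs, and both the exact path and the payload shape shown here are simplifications/assumptions:

```console
# Illustrative only: push a (heavily simplified) backends payload to the
# internal, localhost-only Lua configuration handler instead of reloading NGINX.
$ curl -X POST -H 'Content-Type: application/json' \
    -d '[{"name":"default-http-svc-80","endpoints":[{"address":"10.2.29.5","port":"8080"}]}]' \
    http://127.0.0.1:10246/configuration/backends
```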
---
The following parts of the code can be found:
### Entrypoint
The `main` package is responsible for starting the ingress-nginx program, which can be found in the [cmd/nginx](https://github.com/kubernetes/ingress-nginx/tree/main/cmd/nginx) directory.
### Version
This package is responsible for adding the `version` subcommand, and can be found in the [version](https://github.com/kubernetes/ingress-nginx/tree/main/version) directory.
### Internal code
This part of the code contains the internal logic that composes the Ingress NGINX Controller, and it's split into:
#### Admission Controller
Contains the code of the [Kubernetes Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/), which validates the syntax of ingress objects before accepting them.
This code can be found in [internal/admission/controller](https://github.com/kubernetes/ingress-nginx/tree/main/internal/admission/controller) directory.
#### File functions
Contains auxiliary code that deals with files, such as generating the SHA1 checksum of a file or creating required directories.
This code can be found in [internal/file](https://github.com/kubernetes/ingress-nginx/blob/main/internal/file) directory.
#### Ingress functions
Contains all the logic of the Ingress-Nginx Controller, with some examples being:
* Expected Golang structures that will be used in templates and other parts of the code - [internal/ingress/types.go](https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/types.go).
* Supported annotations and their parsing logic - [internal/ingress/annotations](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/annotations).
* Reconciliation loops and logic - [internal/ingress/controller](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/controller)
* Defaults - defines the default struct - [internal/ingress/defaults](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/defaults).
* Error interface and types implementation - [internal/ingress/errors](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/errors)
* Metrics collectors for Prometheus exporting - [internal/ingress/metric](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/metric).
* Resolver - Extracts information from a controller - [internal/ingress/resolver](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/resolver).
* Ingress Object status publisher - [internal/ingress/status](https://github.com/kubernetes/ingress-nginx/tree/main/internal/ingress/status).
Other parts of the code will be documented here in the future.
#### K8s functions
Contains helper functions for parsing Kubernetes objects.
This part of the code can be found in [internal/k8s](https://github.com/kubernetes/ingress-nginx/tree/main/internal/k8s) directory.
#### Networking functions
Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc.
This part of the code can be found in [internal/net](https://github.com/kubernetes/ingress-nginx/tree/main/internal/net) directory.
#### NGINX functions
Contains helper functions to deal with NGINX, such as verifying whether it is running and reading parts of its configuration file.
This part of the code can be found in [internal/nginx](https://github.com/kubernetes/ingress-nginx/tree/main/internal/nginx) directory.
#### Tasks / Queue
Contains the functions responsible for the sync queue part of the controller.
This part of the code can be found in [internal/task](https://github.com/kubernetes/ingress-nginx/tree/main/internal/task) directory.
#### Other parts of internal
Other parts of the internal code, like runtime and watch, might not be covered here, but they can be added in the future.
## E2E Test
The e2e tests code is in [test](https://github.com/kubernetes/ingress-nginx/tree/main/test) directory.
## Other programs
This section describes the `kubectl plugin`, `dbg`, and `waitshutdown` programs, and covers the hack scripts.
### kubectl plugin
It contains the kubectl plugin for inspecting your ingress-nginx deployments.
This part of the code can be found in the [cmd/plugin](https://github.com/kubernetes/ingress-nginx/tree/main/cmd/plugin) directory.
Detailed function flows and the available commands can be found in [kubectl-plugin](https://github.com/kubernetes/ingress-nginx/blob/main/docs/kubectl-plugin.md).
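For instance, once installed (it is distributed via krew), the plugin can be used to inspect a running deployment. The subcommands below are examples; consult the linked plugin documentation for the authoritative list and flags:

```console
$ kubectl krew install ingress-nginx
$ kubectl ingress-nginx ingresses                  # list ingresses and their backends
$ kubectl ingress-nginx lint                       # check for deprecated annotations and common problems
$ kubectl ingress-nginx backends -n ingress-nginx  # dump the dynamic backend configuration
```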
## Deploy files
This directory contains the `yaml` deploy files used as examples or references in the docs to deploy Ingress NGINX and other components.
Those files are in [deploy](https://github.com/kubernetes/ingress-nginx/tree/main/deploy) directory.
## Helm Chart
Used to generate the published Helm chart.
Code is in [charts/ingress-nginx](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx).
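The published chart is typically consumed along these lines (the release name and namespace below are the conventional ones from the project docs, not requirements):

```console
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace
```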
## Documentation/Website
The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/.
This code is available in [docs](https://github.com/kubernetes/ingress-nginx/tree/main/docs) and its main "language" is `Markdown`, used by the [mkdocs](https://github.com/kubernetes/ingress-nginx/blob/main/mkdocs.yml) file to generate static pages.
## Container Images
Container images used to run ingress-nginx, or to build the final image.
### Base Images
Contains the `Dockerfiles` and scripts used to build base images that are used in other parts of the repo. They are present in [images](https://github.com/kubernetes/ingress-nginx/tree/main/images) repo. Some examples:
* [nginx](https://github.com/kubernetes/ingress-nginx/tree/main/images/nginx) - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date.
* [custom-error-pages](https://github.com/kubernetes/ingress-nginx/tree/main/images/custom-error-pages) - Used on the custom error page examples.
There are other images inside this directory.
### Ingress Controller Image
The image used to build the final ingress controller, used in deploy scripts and Helm charts.
This is NGINX with some Lua enhancements. We do dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. in this component. One can also add new functionality using the Lua plugin system.
The files are in the [rootfs](https://github.com/kubernetes/ingress-nginx/tree/main/rootfs) directory and contain:
* The Dockerfile
* [nginx config](https://github.com/kubernetes/ingress-nginx/tree/main/rootfs/etc/nginx)
#### Ingress NGINX Lua Scripts
Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the [OpenResty](https://openresty.org/en/) helper.
The directory containing Lua scripts is [rootfs/etc/nginx/lua](https://github.com/kubernetes/ingress-nginx/tree/main/rootfs/etc/nginx/lua).
#### Nginx Go template file
One of the functions of Ingress NGINX is to turn [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) objects into an nginx.conf file.
To do so, the final step is to apply those configurations to [nginx.tmpl](https://github.com/kubernetes/ingress-nginx/tree/main/rootfs/etc/nginx/template), turning it into the final nginx.conf file.
Developing for Ingress-Nginx Controller
This document explains how to get started with developing for Ingress-Nginx Controller.
If you are a new contributor to the Ingress-NGINX project and need help understanding the basic concepts
required to work with the Kubernetes Ingress resource, start with the [New Contributors Guide](https://github.com/kubernetes/ingress-nginx/blob/main/NEW_CONTRIBUTOR.md).
That guide explains how an HTTP/HTTPS request travels from a browser or a `curl` command
to the webserver process running inside a container, in a pod, in a Kubernetes cluster, when it enters the cluster through an Ingress resource.
If you are already familiar with basic networking concepts such as packet routing for an HTTP request, connection termination, and reverse proxying,
you can skip that guide and move on to the sections below (or read it anyway for extra context, and share feedback if you have any).
## Prerequisites
Install [Go 1.14](https://golang.org/dl/) or later.
!!! note
The project uses [Go Modules](https://github.com/golang/go/wiki/Modules)
Install [Docker](https://docs.docker.com/engine/install/) (v19.03.0 or later, with experimental features enabled)
Install [kubectl](https://kubernetes.io/docs/tasks/tools/) (1.24.0 or higher)
Install [Kind](https://kind.sigs.k8s.io/)
!!! important
The majority of make tasks run as docker containers
## Quick Start
1. Fork the repository
2. Clone the repository to any location on your workstation
3. Add a `GO111MODULE` environment variable with `export GO111MODULE=on`
4. Run `go mod download` to install dependencies (the steps above are combined into a single example below)
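Putting those steps together, a typical first-time setup might look like the following; the fork URL is a placeholder for your own fork, not a real repository path.
```console
$ git clone https://github.com/<your-username>/ingress-nginx.git
$ cd ingress-nginx
$ export GO111MODULE=on
$ go mod download
```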
### Local build
Start a local Kubernetes cluster using [kind](https://kind.sigs.k8s.io/), then build and deploy the ingress controller:
```console
make dev-env
```
- If you are working on the v1.x.x version of this controller and want to create a cluster with Kubernetes version 1.22, see the [documentation for kind](https://kind.sigs.k8s.io/docs/user/configuration/#a-note-on-cli-parameters-and-configuration-files) for how to set a custom image for the kind node (`image: kindest/node...`) in the kind config file, as sketched below.
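As an illustration only, a kind configuration that pins the node image could look like this; the exact `kindest/node` tag is an assumption, so take the tag for your target Kubernetes version from the kind release notes.
```yaml
# kind-config.yaml -- illustrative only
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.22.17   # assumed tag; use the image published for your Kubernetes version
```
You would then create the cluster with `kind create cluster --config kind-config.yaml`.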
### Testing
**Run go unit tests**
```console
make test
```
**Run unit-tests for lua code**
```console
make lua-test
```
Lua tests are located in the directory `rootfs/etc/nginx/lua/test`
!!! important
Test files must follow the naming convention `<mytest>_test.lua` or it will be ignored
**Run e2e test suite**
```console
make kind-e2e-test
```
To limit the scope of the tests to execute, we can use the environment variable `FOCUS`
```console
FOCUS="no-auth-locations" make kind-e2e-test
```
!!! note
The variable `FOCUS` defines Ginkgo [Focused Specs](https://onsi.github.io/ginkgo/#focused-specs)
Valid values are defined in the describe definition of the e2e tests like [Default Backend](https://github.com/kubernetes/ingress-nginx/blob/main/test/e2e/defaultbackend/default_backend.go#L29)
The complete list of tests can be found [here](../e2e-tests.md)
### Custom docker image
In some cases, it can be useful to build a docker image and publish it to a private or custom registry location.
This can be done by setting two environment variables, `REGISTRY` and `TAG`:
```console
export TAG="dev"
export REGISTRY="$USER"
make build image
```
and then publish that version with:
```console
docker push $REGISTRY/controller:$TAG
``` | ingress nginx | Developing for Ingress Nginx Controller This document explains how to get started with developing for Ingress Nginx Controller For the really new contributors who want to contribute to the INGRESS NGINX project but need help with understanding some basic concepts that are needed to work with the Kubernetes ingress resource here is a link to the New Contributors Guide https github com kubernetes ingress nginx blob main NEW CONTRIBUTOR md This guide contains tips on how a http https request travels from a browser or a curl command to the webserver process running inside a container in a pod in a Kubernetes cluster but enters the cluster via a ingress resource For those who are familiar with those basic networking concepts like routing of a packet with regards to a http request termination of connection reverseproxy etc etc you can skip this and move on to the sections below or read it anyways just for context and also provide feedbacks if any Prerequisites Install Go 1 14 https golang org dl or later note The project uses Go Modules https github com golang go wiki Modules Install Docker https docs docker com engine install v19 03 0 or later with experimental feature on Install kubectl https kubernetes io docs tasks tools 1 24 0 or higher Install Kind https kind sigs k8s io important The majority of make tasks run as docker containers Quick Start 1 Fork the repository 2 Clone the repository to any location in your work station 3 Add a GO111MODULE environment variable with export GO111MODULE on 4 Run go mod download to install dependencies Local build Start a local Kubernetes cluster using kind https kind sigs k8s io build and deploy the ingress controller console make dev env If you are working on the v1 x x version of this controller and you want to create a cluster with kubernetes version 1 22 then please visit the documentation for kind https kind sigs k8s io docs user configuration a note on cli parameters and configuration files and look for how to set a custom image for the kind node image kindest node in the kind config file Testing Run go unit tests console make test Run unit tests for lua code console make lua test Lua tests are located in the directory rootfs etc nginx lua test important Test files must follow the naming convention mytest test lua or it will be ignored Run e2e test suite console make kind e2e test To limit the scope of the tests to execute we can use the environment variable FOCUS console FOCUS no auth locations make kind e2e test note The variable FOCUS defines Ginkgo Focused Specs https onsi github io ginkgo focused specs Valid values are defined in the describe definition of the e2e tests like Default Backend https github com kubernetes ingress nginx blob main test e2e defaultbackend default backend go L29 The complete list of tests can be found here e2e tests md Custom docker image In some cases it can be useful to build a docker image and publish such an image to a private or custom registry location This can be done setting two environment variables REGISTRY and TAG console export TAG dev export REGISTRY USER make build image and then publish such version with console docker push REGISTRY controller TAG |
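To run the controller from that image, one option is to point the in-repo Helm chart at it. This is only a sketch: the value names below are taken from a recent version of the chart's `values.yaml` and may differ between chart versions, so verify them before use.
```console
$ helm upgrade --install ingress-nginx charts/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set controller.image.registry="$REGISTRY" \
    --set controller.image.image="controller" \
    --set controller.image.tag="$TAG" \
    --set controller.image.digest=""
```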
---
title: Health Checking of Istio Services
description: Shows how to do health checking for Istio services.
weight: 50
aliases:
- /docs/tasks/traffic-management/app-health-check/
- /docs/ops/security/health-checks-and-mtls/
- /help/ops/setup/app-health-check
- /help/ops/app-health-check
- /docs/ops/app-health-check
- /docs/ops/setup/app-health-check
keywords: [security,health-check]
owner: istio/wg-user-experience-maintainers
test: yes
---
The [Kubernetes liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) documentation
describes several ways to configure liveness and readiness probes:
1. [Command](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command)
1. [HTTP request](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request)
1. [TCP probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe)
1. [gRPC probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)
The command approach works with no changes required, but HTTP requests, TCP probes, and gRPC probes require Istio to make changes to the pod configuration.
The health check requests to the `liveness-http` service are sent by Kubelet.
This becomes a problem when mutual TLS is enabled, because the Kubelet does not have an Istio issued certificate.
Therefore the health check requests will fail.
TCP probe checks need special handling, because Istio redirects all incoming traffic into the sidecar, and so all TCP ports appear open. The Kubelet simply checks if some process is listening on the specified port, and so the probe will always succeed as long as the sidecar is running.
Istio solves both these problems by rewriting the application `PodSpec` readiness/liveness probe,
so that the probe request is sent to the [sidecar agent](/docs/reference/commands/pilot-agent/).
## Liveness probe rewrite example
To demonstrate how the readiness/liveness probe is rewritten at the application `PodSpec` level, let us use the [liveness-http-same-port sample](/samples/health-check/liveness-http-same-port.yaml).
First create and label a namespace for the example:
```console
$ kubectl create namespace istio-io-health-rewrite
$ kubectl label namespace istio-io-health-rewrite istio-injection=enabled
```
And deploy the sample application:
```console
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: liveness-http
namespace: istio-io-health-rewrite
spec:
selector:
matchLabels:
app: liveness-http
version: v1
template:
metadata:
labels:
app: liveness-http
version: v1
spec:
containers:
- name: liveness-http
image: docker.io/istio/health:example
ports:
- containerPort: 8001
livenessProbe:
httpGet:
path: /foo
port: 8001
initialDelaySeconds: 5
periodSeconds: 5
EOF
```
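The commands that follow reference the pod name through a `$LIVENESS_POD` variable. If you have not set it yet, one way to capture it (this helper is a convenience sketch, not part of the sample) is:
```console
$ LIVENESS_POD=$(kubectl get pod -n istio-io-health-rewrite -l app=liveness-http -o jsonpath='{.items[0].metadata.name}')
```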
Once deployed, you can inspect the pod's application container to see the changed path:
```console
$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o json | jq '.spec.containers[0].livenessProbe.httpGet'
{
"path": "/app-health/liveness-http/livez",
"port": 15020,
"scheme": "HTTP"
}
```
The original `livenessProbe` path is now mapped against the new path in the sidecar container environment variable `ISTIO_KUBE_APP_PROBERS`:
```console
$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o=jsonpath="{.spec.containers[1].env[?(@.name=='ISTIO_KUBE_APP_PROBERS')]}"
{
"name":"ISTIO_KUBE_APP_PROBERS",
"value":"{\"/app-health/liveness-http/livez\":{\"httpGet\":{\"path\":\"/foo\",\"port\":8001,\"scheme\":\"HTTP\"},\"timeoutSeconds\":1}}"
}
```
For HTTP and gRPC requests, the sidecar agent redirects the request to the application and strips the response body, only returning the response code. For TCP probes, the sidecar agent will then do the port check while avoiding the traffic redirection.
The rewriting of problematic probes is enabled by default in all built-in Istio
[configuration profiles](/docs/setup/additional-setup/config-profiles/) but can be disabled as described below.
## Liveness and readiness probes using the command approach
Istio provides a [liveness sample](/samples/health-check/liveness-command.yaml) that
implements this approach. To demonstrate it working with mutual TLS enabled,
first create a namespace for the example:
```console
$ kubectl create ns istio-io-health
```
To configure strict mutual TLS, run:
```console
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
name: "default"
namespace: "istio-io-health"
spec:
mtls:
mode: STRICT
EOF
```
Next, change directory to the root of the Istio installation and run the following command to deploy the sample service:
```console
$ kubectl -n istio-io-health apply -f <(istioctl kube-inject -f @samples/health-check/liveness-command.yaml@)
```
To confirm that the liveness probes are working, check the status of the sample pod to verify that it is running.
```console
$ kubectl -n istio-io-health get pod
NAME READY STATUS RESTARTS AGE
liveness-6857c8775f-zdv9r 2/2 Running 0 4m
```
## Liveness and readiness probes using the HTTP, TCP, and gRPC approach {#liveness-and-readiness-probes-using-the-http-request-approach}
As stated previously, Istio uses probe rewrite to implement HTTP, TCP, and gRPC probes by default. You can disable this
feature either for specific pods, or globally.
### Disable the probe rewrite for a pod {#disable-the-http-probe-rewrite-for-a-pod}
You can [annotate the pod](/docs/reference/config/annotations/) with `sidecar.istio.io/rewriteAppHTTPProbers: "false"`
to disable the probe rewrite option. Make sure you add the annotation to the
[pod resource](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) because it will be ignored
anywhere else (for example, on an enclosing deployment resource).
```console
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: liveness-http
spec:
selector:
matchLabels:
app: liveness-http
version: v1
template:
metadata:
labels:
app: liveness-http
version: v1
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "false"
spec:
containers:
- name: liveness-http
image: docker.io/istio/health:example
ports:
- containerPort: 8001
livenessProbe:
httpGet:
path: /foo
port: 8001
initialDelaySeconds: 5
periodSeconds: 5
EOF
```
```console
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: liveness-grpc
spec:
selector:
matchLabels:
app: liveness-grpc
version: v1
template:
metadata:
labels:
app: liveness-grpc
version: v1
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "false"
spec:
containers:
- name: etcd
image: registry.k8s.io/etcd:3.5.1-0
command: ["--listen-client-urls", "http://0.0.0.0:2379", "--advertise-client-urls", "http://127.0.0.1:2379", "--log-level", "debug"]
ports:
- containerPort: 2379
livenessProbe:
grpc:
port: 2379
initialDelaySeconds: 10
periodSeconds: 5
EOF
```
This approach allows you to disable the health check probe rewrite gradually on individual deployments,
without reinstalling Istio.
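To confirm that the rewrite was skipped for such a pod, you can check that the probe in the pod spec still points at the original path and port; the label selector below assumes the `app: liveness-http` label from the example above, and `jq` mirrors the earlier inspection command.
```console
$ kubectl get pod -l app=liveness-http -o json | jq '.items[0].spec.containers[0].livenessProbe.httpGet'
```
You should see the original `/foo` path on port `8001` rather than a rewritten `/app-health/...` path on port `15020`.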
### Disable the probe rewrite globally
[Install Istio](/docs/setup/install/istioctl/) using `--set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=false`
to disable the probe rewrite globally. **Alternatively**, update the configuration map for the Istio sidecar injector:
```console
$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e 's/"rewriteAppHTTPProbe": true/"rewriteAppHTTPProbe": false/' | kubectl apply -f -
```
## Cleanup
Remove the namespaces used for the examples:
```console
$ kubectl delete ns istio-io-health istio-io-health-rewrite
```
---
title: Configuration Scoping
description: Shows how to scope configuration in Istio, for operational and performance benefits.
weight: 60
keywords: [scalability]
owner: istio/wg-networking-maintainers
test: no
---
In order to program the service mesh, the Istio control plane (Istiod) reads a variety of configurations, including core Kubernetes types like `Service` and `Node`,
and Istio's own types like `Gateway`.
These are then sent to the data plane (see [Architecture](/docs/ops/deployment/architecture/) for more information).
By default, the control plane will read all configuration in all namespaces.
Each proxy instance will receive configuration for all namespaces as well.
This includes information about workloads that are not enrolled in the mesh.
This default ensures correct behavior out of the box, but comes with a scalability cost.
Each configuration has a cost (in CPU and memory, primarily) to maintain and keep up to date.
At large scales, it is critical to limit the configuration scope to avoid excessive resource consumption.
## Scoping mechanisms
Istio offers a few tools to help control the scope of a configuration to meet different use cases.
Depending on your requirements, these can be used alone or together.
* `Sidecar` provides a mechanism for specific workloads to _import_ a set of configurations
* `exportTo` provides a mechanism to _export_ a configuration to a set of workloads
* `discoverySelectors` provides a mechanism to let Istio completely ignore a set of configurations
### `Sidecar` import
The [`egress.hosts`](/docs/reference/config/networking/sidecar/#IstioEgressListener) field in `Sidecar`
allows specifying a list of configurations to import.
Only configurations matching the specified criteria will be seen by sidecars impacted by the `Sidecar` resource.
For example:
```yaml
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
spec:
  egress:
  - hosts:
    - "./*" # Import all configuration from our own namespace
    - "bookinfo/*" # Import all configuration from the bookinfo namespace
    - "external-services/example.com" # Import only 'example.com' from the external-services namespace
```
### `exportTo`
Istio's `VirtualService`, `DestinationRule`, and `ServiceEntry` provide a `spec.exportTo` field.
Similarly, `Service` can be configured with the `networking.istio.io/exportTo` annotation.
Unlike `Sidecar` which allows a workload owner to control what dependencies it has, `exportTo` works in the opposite way, and allows the service owners to control
their own service's visibility.
For example, this configuration makes the `details` `Service` only visible to its own namespace, and the `client` namespace:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: details
  annotations:
    networking.istio.io/exportTo: ".,client"
spec: ...
```
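The same scoping is available on Istio resources through the `spec.exportTo` field. For example, this minimal sketch (the host name is illustrative) keeps a `DestinationRule` visible only within its own namespace:
```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: details
spec:
  host: details   # illustrative host
  exportTo:
  - "."           # only visible to the namespace this DestinationRule is defined in
```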
### `DiscoverySelectors`
While the previous controls operate on a workload or service owner level, [`DiscoverySelectors`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig) provides mesh wide control over configuration visibility.
Discovery selectors let you specify criteria for which namespaces should be visible to the control plane.
Any namespaces not matching are ignored by the control plane entirely.
This can be configured as part of `meshConfig` during installation. For example:
```yaml
meshConfig:
discoverySelectors:
- matchLabels:
# Allow any namespaces with `istio-discovery=enabled`
istio-discovery: enabled
- matchLabels:
# Allow "kube-system"; Kubernetes automatically adds this label to each namespace
kubernetes.io/metadata.name: kube-system
```
Istiod will always open a watch to Kubernetes for all namespaces.
However, discovery selectors will ignore objects that are not selected very early in its processing, minimizing costs.
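With a configuration like the one above, a namespace only becomes visible to the control plane once it carries a matching label, which you can add with a standard `kubectl` command (the namespace name is a placeholder):
```console
$ kubectl label namespace my-app-namespace istio-discovery=enabled
```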
## Frequently asked questions
### How can I understand the cost of a certain configuration?
In order to get the best return-on-investment for scoping down configuration, it can be helpful to understand the cost of each object.
Unfortunately, there is not a straightforward answer; scalability depends on a large number of factors.
However, there are a few general guidelines:
Configuration *changes* are expensive in Istio, as they require recomputation.
While `Endpoints` changes (generally from a Pod scaling up or down) are heavily optimized, most other configurations are fairly expensive.
This can be especially harmful when controllers are constantly making changes to an object (sometimes this happens accidentally!).
Some tools to detect which configurations are changing:
* Istiod will log each change like: `Push debounce stable 1 for config Gateway/default/gateway: ..., full=true`.
This shows that a `Gateway` object in the `default` namespace changed. `full=false` would represent an optimized update, such as an `Endpoints` change.
Note: changes to `Service` and `Endpoints` will all show as `ServiceEntry`.
* Istiod exposes metrics `pilot_k8s_cfg_events` and `pilot_k8s_reg_events` for each change.
* `kubectl get <resource> --watch -oyaml --show-managed-fields` can show changes to an object (or objects) to help understand what is changing, and by whom.
[Headless services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) (besides ones declared as [HTTP](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection))
scale with the number of instances. This makes large headless services expensive, and a good candidate for exclusion with `exportTo` or equivalent.
### What happens if I connect to a service outside of my scope?
When connecting to a service that has been excluded through one of the scoping mechanisms, the data plane will not know anything about the destination,
so it will be treated as [Unmatched traffic](/docs/ops/configuration/traffic-management/traffic-routing/#unmatched-traffic).
### What about Gateways?
While [Gateways](/docs/setup/additional-setup/gateway/) will respect `exportTo` and `DiscoverySelectors`, `Sidecar` objects do not impact Gateways.
However, unlike sidecars, gateways do not have configuration for the entire cluster by default.
Instead, each configuration is explicitly attached to the gateway, which mostly avoids this problem.
However, [currently](https://github.com/istio/istio/issues/29131) part of the data plane configuration (a "cluster", in Envoy terms), is always sent for
the entire cluster, even if it is not referenced explicitly.
---
title: Security policy examples
description: Shows common examples of using Istio security policy.
weight: 60
owner: istio/wg-security-maintainers
test: yes
---
## Background
This page shows common patterns of using Istio security policies. You may find them useful in your deployment or use this
as a quick reference to example policies.
The policies demonstrated here are just examples and require changes to adapt to your actual environment
before applying.
Also read the [authentication](/docs/tasks/security/authentication/authn-policy) and
[authorization](/docs/tasks/security/authorization) tasks for a hands-on tutorial of using the security policy in
more detail.
## Require different JWT issuer per host
JWT validation is common on the ingress gateway and you may want to require different JWT issuers for different
hosts. You can use the authorization policy for fine grained JWT validation in addition to the
[request authentication](/docs/tasks/security/authentication/authn-policy/#end-user-authentication) policy.
Use the following policy if you want to allow access to the given hosts if JWT principal matches. Access to other hosts
will always be denied.
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: jwt-per-host
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: ALLOW
rules:
- from:
- source:
# the JWT token must have issuer with suffix "@example.com"
requestPrincipals: ["*@example.com"]
to:
- operation:
hosts: ["example.com", "*.example.com"]
- from:
- source:
# the JWT token must have issuer with suffix "@another.org"
requestPrincipals: ["*@another.org"]
to:
- operation:
hosts: [".another.org", "*.another.org"]
## Namespace isolation
The following two policies enable strict mTLS on namespace `foo`, and allow traffic from the same namespace.
```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
name: default
namespace: foo
spec:
mtls:
mode: STRICT
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: foo-isolation
namespace: foo
spec:
action: ALLOW
rules:
- from:
- source:
namespaces: ["foo"]
```
## Namespace isolation with ingress exception
The following two policies enable strict mTLS on namespace `foo`, and allow traffic from the same namespace and also
from the ingress gateway.
```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
name: default
namespace: foo
spec:
mtls:
mode: STRICT
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: ns-isolation-except-ingress
namespace: foo
spec:
action: ALLOW
rules:
- from:
- source:
namespaces: ["foo"]
- source:
principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
```
## Require mTLS in authorization layer (defense in depth)
You have configured `PeerAuthentication` to `STRICT` but want to make sure the traffic is indeed protected by mTLS with
an extra check in the authorization layer, i.e., defense in depth.
The following policy denies the request if the principal is empty. The principal will be empty if plain text is used.
In other words, the policy allows requests if the principal is non-empty.
`"*"` means non-empty match and using with `notPrincipals` means matching on empty principal.
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: require-mtls
namespace: foo
spec:
action: DENY
rules:
- from:
- source:
notPrincipals: ["*"]
```
## Require mandatory authorization check with `DENY` policy
You can use the `DENY` policy if you want to require mandatory authorization check that must be satisfied and cannot be
bypassed by another more permissive `ALLOW` policy. This works because the `DENY` policy takes precedence over the
`ALLOW` policy and could deny a request early before `ALLOW` policies.
Use the following policy to enforce mandatory JWT validation in addition to the [request authentication](/docs/tasks/security/authentication/authn-policy/#end-user-authentication) policy.
The policy denies the request if the request principal is empty. The request principal will be empty if JWT validation failed.
In other words, the policy allows requests if the request principal is non-empty.
`"*"` means non-empty match and using with `notRequestPrincipals` means matching on empty request principal.
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: require-jwt
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: DENY
rules:
- from:
- source:
notRequestPrincipals: ["*"]
```
Similarly, use the following policy to require mandatory namespace isolation while still allowing requests from the ingress gateway.
The policy denies the request if the namespace is not `foo` and the principal is not `cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account`.
In other words, the policy allows the request only if the namespace is `foo` or the principal is `cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account`.
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: ns-isolation-except-ingress
namespace: foo
spec:
action: DENY
rules:
- from:
- source:
notNamespaces: ["foo"]
        notPrincipals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
```
---
title: Monitoring Multicluster Istio with Prometheus
description: Configure Prometheus to monitor multicluster Istio.
weight: 10
aliases:
- /help/ops/telemetry/monitoring-multicluster-prometheus
- /docs/ops/telemetry/monitoring-multicluster-prometheus
owner: istio/wg-policies-and-telemetry-maintainers
test: no
---
## Overview
This guide is meant to provide operational guidance on how to configure monitoring of Istio meshes comprised of two
or more individual Kubernetes clusters. It is not meant to establish the *only* possible path forward, but rather
to demonstrate a workable approach to multicluster telemetry with Prometheus.
Our recommendation for multicluster monitoring of Istio with Prometheus is built upon the foundation of Prometheus
[hierarchical federation](https://prometheus.io/docs/prometheus/latest/federation/#hierarchical-federation).
Prometheus instances that are deployed locally to each cluster by Istio act as initial collectors that then federate up
to a production mesh-wide Prometheus instance. That mesh-wide Prometheus can either live outside of the mesh (external), or in one
of the clusters within the mesh.
## Multicluster Istio setup
Follow the [multicluster installation](/docs/setup/install/multicluster/) section to set up your Istio clusters in one of the
supported [multicluster deployment models](/docs/ops/deployment/deployment-models/#multiple-clusters). For the purpose of
this guide, any of those approaches will work, with the following caveat:
**Ensure that a cluster-local Istio Prometheus instance is installed in each cluster.**
Individual Istio deployment of Prometheus in each cluster is required to form the basis of cross-cluster monitoring by
way of federation to a production-ready instance of Prometheus that runs externally or in one of the clusters.
Validate that you have an instance of Prometheus running in each cluster:
```console
$ kubectl -n istio-system get services prometheus
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus ClusterIP 10.8.4.109 <none> 9090/TCP 20h
```
## Configure Prometheus federation
### External production Prometheus
There are several reasons why you may want to have a Prometheus instance running outside of your Istio deployment.
Perhaps you want long-term monitoring disjoint from the cluster being monitored. Perhaps you want to monitor multiple
separate meshes in a single place. Or maybe you have other motivations. Whatever your reason is, you’ll need some special
configurations to make it all work.
This guide demonstrates connectivity to cluster-local Prometheus instances, but does not address security considerations.
For production use, secure access to each Prometheus endpoint with HTTPS. In addition, take precautions, such as using an
internal load-balancer instead of a public endpoint and the appropriate configuration of firewall rules.
Istio provides a way to expose cluster services externally via [Gateways](/docs/reference/config/networking/gateway/).
You can configure an ingress gateway for the cluster-local Prometheus, providing external connectivity to the in-cluster
Prometheus endpoint.
For each cluster, follow the appropriate instructions from the [Remotely Accessing Telemetry Addons](/docs/tasks/observability/gateways/#option-1-secure-access-https) task.
Also note that you **SHOULD** establish secure (HTTPS) access.
Next, configure your external Prometheus instance to access the cluster-local Prometheus instances using a configuration
like the following (replacing the ingress domain and cluster name):
```yaml
scrape_configs:
- job_name: 'federate-'
scrape_interval: 15s
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{job="kubernetes-pods"}'
static_configs:
- targets:
- 'prometheus.'
labels:
cluster: ''
```
Notes:
* `CLUSTER_NAME` should be set to the same value that you used to create the cluster (set via `values.global.multiCluster.clusterName`).
* No authentication to the Prometheus endpoint(s) is provided. This means that anyone can query your
cluster-local Prometheus instances. This may not be desirable.
* Without proper HTTPS configuration of the gateway, everything is transported in plaintext. This may not be
desirable (one way to harden the scrape configuration is sketched below).
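For example, if the cluster-local Prometheus endpoints are exposed over HTTPS with basic authentication, the federation job shown above can be hardened with standard `scrape_config` options; the credentials, CA path, and target host below are placeholders rather than values produced by this guide.
```yaml
scrape_configs:
- job_name: 'federate-secure'
  scheme: https
  metrics_path: '/federate'
  basic_auth:
    username: 'federation-user'            # placeholder
    password: 'federation-password'        # placeholder
  tls_config:
    ca_file: /etc/prometheus/certs/ca.crt  # placeholder CA bundle for the gateway certificate
  params:
    'match[]':
    - '{job="kubernetes-pods"}'
  static_configs:
  - targets:
    - 'prometheus.example.com'             # placeholder ingress host
```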
### Production Prometheus on an in-mesh cluster
If you prefer to run the production Prometheus in one of the clusters, you need to establish connectivity from it to
the other cluster-local Prometheus instances in the mesh.
This is really just a variation of the configuration for external federation. In this case the configuration on the
cluster running the production Prometheus is different from the configuration for remote cluster Prometheus scraping.
Configure your production Prometheus to access both of the *local* and *remote* Prometheus instances.
First execute the following command:
```console
$ kubectl -n istio-system edit cm prometheus -o yaml
```
Then add configurations for the *remote* clusters (replacing the ingress domain and cluster name for each cluster) and
add one configuration for the *local* cluster:
```yaml
scrape_configs:
- job_name: 'federate-'
scrape_interval: 15s
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{job="kubernetes-pods"}'
static_configs:
- targets:
- 'prometheus.'
labels:
cluster: ''
- job_name: 'federate-local'
honor_labels: true
metrics_path: '/federate'
metric_relabel_configs:
- replacement: ''
target_label: cluster
kubernetes_sd_configs:
- role: pod
namespaces:
names: ['istio-system']
params:
'match[]':
- '{__name__=~"istio_(.*)"}'
- '{__name__=~"pilot(.*)"}'
istio owner istio wg networking maintainers weight 30 linktitle TLS Configuration keywords traffic management proxy How to configure TLS settings to secure network traffic test n a title Understanding TLS Configuration | ---
title: Understanding TLS Configuration
linktitle: TLS Configuration
description: How to configure TLS settings to secure network traffic.
weight: 30
keywords: [traffic-management,proxy]
owner: istio/wg-networking-maintainers
test: n/a
---
One of Istio's most important features is the ability to lock down and secure network traffic to, from,
and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration.
This document attempts to explain the various connections involved when sending requests in Istio and how
their associated TLS settings are configured.
Refer to [TLS configuration mistakes](/docs/ops/common-problems/network-issues/#tls-configuration-mistakes)
for a summary of some of the most common TLS configuration problems.
## Sidecars
Sidecar traffic has a variety of associated connections. Let's break them down one at a time.
1. **External inbound traffic**
This is traffic coming from an outside client that is captured by the sidecar.
If the client is inside the mesh, this traffic may be encrypted with Istio mutual TLS.
By default, the sidecar will be configured to accept both mTLS and non-mTLS traffic, known as `PERMISSIVE` mode.
The mode can alternatively be configured to `STRICT`, where traffic must be mTLS, or `DISABLE`, where traffic must be plaintext.
The mTLS mode is configured using a [`PeerAuthentication` resource](/docs/reference/config/security/peer_authentication/).
1. **Local inbound traffic**
This is traffic going to your application service, from the sidecar. This traffic will always be forwarded as-is.
Note that this does not mean it's always plaintext; the sidecar may pass a TLS connection through.
It just means that a new TLS connection will never be originated from the sidecar.
1. **Local outbound traffic**
This is outgoing traffic from your application service that is intercepted by the sidecar.
Your application may be sending plaintext or TLS traffic.
If [automatic protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection)
is enabled, Istio will automatically detect the protocol. Otherwise you should use the port name in the destination service to
[manually specify the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection).
1. **External outbound traffic**
This is traffic leaving the sidecar to some external destination. Traffic can be forwarded as is, or a TLS connection can
be initiated (mTLS or standard TLS). This is controlled using the TLS mode setting in the `trafficPolicy` of a
[`DestinationRule` resource](/docs/reference/config/networking/destination-rule/).
A mode setting of `DISABLE` will send plaintext, while `SIMPLE`, `MUTUAL`, and `ISTIO_MUTUAL` will originate a TLS connection.
The key takeaways are:
- `PeerAuthentication` is used to configure what type of mTLS traffic the sidecar will accept.
- `DestinationRule` is used to configure what type of TLS traffic the sidecar will send.
- Port names, or automatic protocol selection, determines which protocol the sidecar will parse traffic as.
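For example, a minimal sketch combining the first two settings might look like the following. The `foo` namespace and `httpbin` service are placeholders, and because auto mTLS (described below) is enabled by default, the `DestinationRule` half is usually unnecessary; it is shown only to make the outbound setting explicit:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT            # sidecars in "foo" only accept mTLS traffic
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-mtls
  namespace: foo
spec:
  host: httpbin.foo.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # clients originate Istio mutual TLS when calling httpbin
```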
## Auto mTLS
As described above, a `DestinationRule` controls whether outgoing traffic uses mTLS or not.
However, configuring this for every workload can be tedious. Typically, you want Istio to always use mTLS
wherever possible, and only send plaintext to workloads that are not part of the mesh (i.e., ones without sidecars).
Istio makes this easy with a feature called "Auto mTLS", which does exactly that. If TLS settings are
not explicitly configured in a `DestinationRule`, the sidecar will automatically determine if
[Istio mutual TLS](/about/faq/#difference-between-mutual-and-istio-mutual) should be sent.
This means that without any configuration, all inter-mesh traffic will be mTLS encrypted.
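If you do need to opt out of this behavior mesh-wide, the `enableAutoMtls` mesh config field can be set to `false`. A minimal sketch, assuming you install with an `IstioOperator` overlay:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableAutoMtls: false   # disable automatic mTLS detection for outbound traffic
```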
## Gateways
Any given request to a gateway will have two connections.
1. The inbound request, initiated by some client such as `curl` or a web browser. This is often called the "downstream" connection.
1. The outbound request, initiated by the gateway to some backend. This is often called the "upstream" connection.
Both of these connections have independent TLS configurations.
Note that the configuration of ingress and egress gateways is identical.
The `istio-ingressgateway` and `istio-egressgateway` are just two specialized gateway deployments.
The difference is that the client of an ingress gateway is running outside of the mesh while in the case of an egress gateway,
the destination is outside of the mesh.
### Inbound
As part of the inbound request, the gateway must decode the traffic in order to apply routing rules.
This is done based on the server configuration in a [`Gateway` resource](/docs/reference/config/networking/gateway/).
For example, if an inbound connection is plaintext HTTP, the port protocol is configured as `HTTP`:
```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
...
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
```
Similarly, for raw TCP traffic, the protocol would be set to `TCP`.
For TLS connections, there are a few more options:
1. What protocol is encapsulated?
If the connection is HTTPS, the server protocol should be configured as `HTTPS`.
Otherwise, for a raw TCP connection encapsulated with TLS, the protocol should be set to `TLS`.
1. Is the TLS connection terminated or passed through?
For passthrough traffic, configure the TLS mode field to `PASSTHROUGH`:
```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
...
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: PASSTHROUGH
```
In this mode, Istio will route based on SNI information and forward the connection as-is to the destination.
1. Should mutual TLS be used?
Mutual TLS can be configured through the TLS mode `MUTUAL`. When this is configured, a client certificate will be
requested and verified against the configured `caCertificates` or `credentialName`:
```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
...
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: MUTUAL
    caCertificates: ...
```
### Outbound
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls
what type of traffic the gateway will send. This is configured by the TLS settings in a `DestinationRule`,
just like external outbound traffic from [sidecars](#sidecars), or [auto mTLS](#auto-mtls) by default.
The only difference is that you should be careful to consider the `Gateway` settings when configuring this.
For example, if the `Gateway` is configured with TLS `PASSTHROUGH` while the `DestinationRule` configures TLS origination,
you will end up with [double encryption](/docs/ops/common-problems/network-issues/#double-tls).
This works, but is often not the desired behavior.
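For example, a minimal sketch of a `DestinationRule` that originates standard TLS from an egress gateway to a hypothetical external host (rather than forwarding plaintext) might look like:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: originate-tls-example
spec:
  host: my-backend.example.com   # hypothetical external destination
  trafficPolicy:
    tls:
      mode: SIMPLE               # originate standard TLS to the destination
```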
A `VirtualService` bound to the gateway needs care as well to
[ensure it is consistent](/docs/ops/common-problems/network-issues/#gateway-mismatch)
with the `Gateway` definition. | istio | title Understanding TLS Configuration linktitle TLS Configuration description How to configure TLS settings to secure network traffic weight 30 keywords traffic management proxy owner istio wg networking maintainers test n a One of Istio s most important features is the ability to lock down and secure network traffic to from and within the mesh However configuring TLS settings can be confusing and a common source of misconfiguration This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured Refer to TLS configuration mistakes docs ops common problems network issues tls configuration mistakes for a summary of some the most common TLS configuration problems Sidecars Sidecar traffic has a variety of associated connections Let s break them down one at a time 1 External inbound traffic This is traffic coming from an outside client that is captured by the sidecar If the client is inside the mesh this traffic may be encrypted with Istio mutual TLS By default the sidecar will be configured to accept both mTLS and non mTLS traffic known as PERMISSIVE mode The mode can alternatively be configured to STRICT where traffic must be mTLS or DISABLE where traffic must be plaintext The mTLS mode is configured using a PeerAuthentication resource docs reference config security peer authentication 1 Local inbound traffic This is traffic going to your application service from the sidecar This traffic will always be forwarded as is Note that this does not mean it s always plaintext the sidecar may pass a TLS connection through It just means that a new TLS connection will never be originated from the sidecar 1 Local outbound traffic This is outgoing traffic from your application service that is intercepted by the sidecar Your application may be sending plaintext or TLS traffic If automatic protocol selection docs ops configuration traffic management protocol selection automatic protocol selection is enabled Istio will automatically detect the protocol Otherwise you should use the port name in the destination service to manually specify the protocol docs ops configuration traffic management protocol selection explicit protocol selection 1 External outbound traffic This is traffic leaving the sidecar to some external destination Traffic can be forwarded as is or a TLS connection can be initiated mTLS or standard TLS This is controlled using the TLS mode setting in the trafficPolicy of a DestinationRule resource docs reference config networking destination rule A mode setting of DISABLE will send plaintext while SIMPLE MUTUAL and ISTIO MUTUAL will originate a TLS connection The key takeaways are PeerAuthentication is used to configure what type of mTLS traffic the sidecar will accept DestinationRule is used to configure what type of TLS traffic the sidecar will send Port names or automatic protocol selection determines which protocol the sidecar will parse traffic as Auto mTLS As described above a DestinationRule controls whether outgoing traffic uses mTLS or not However configuring this for every workload can be tedious Typically you want Istio to always use mTLS wherever possible and only send plaintext to workloads that are not part of the mesh i e ones without sidecars Istio makes this easy with a feature called Auto mTLS Auto mTLS works by doing exactly that If TLS settings are not explicitly configured in a DestinationRule the sidecar will automatically determine if Istio mutual 
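For reference, a minimal sketch of a `VirtualService` attached to a hypothetical `my-gateway` is shown below. The `hosts` field must be compatible with the hosts declared on the `Gateway` server, and the backend service name is a placeholder:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: my-routes
spec:
  hosts:
  - "myapp.example.com"          # must be covered by the Gateway server hosts
  gateways:
  - my-gateway                   # hypothetical Gateway name
  http:
  - route:
    - destination:
        host: myapp.default.svc.cluster.local
        port:
          number: 8080
```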
---
title: Configuring Gateway Network Topology
description: How to configure gateway network topology.
weight: 60
keywords: [traffic-management,ingress,gateway]
owner: istio/wg-networking-maintainers
test: yes
status: Alpha
---
## Forwarding external client attributes (IP address, certificate info) to destination workloads
Many applications require knowing the client IP address and certificate information of the originating request to behave
properly. Notable cases include logging and audit tools that require the client IP be populated and security tools,
such as Web Application Firewalls (WAF), that need this information to apply rule sets properly. The ability to
provide client attributes to services has long been a staple of reverse proxies. To forward these client
attributes to destination workloads, proxies use the `X-Forwarded-For` (XFF) and `X-Forwarded-Client-Cert` (XFCC) headers.
Today's networks vary widely in nature, but support for these attributes is a requirement no matter what the network topology is.
This information should be preserved
and forwarded whether the network uses cloud-based Load Balancers, on-premise Load Balancers, gateways that are
exposed directly to the internet, gateways that serve many intermediate proxies, and other deployment topologies not
specified.
While Istio provides an [ingress gateway](/docs/tasks/traffic-management/ingress/ingress-control/), given the varieties
of architectures mentioned above, reasonable defaults are not able to be shipped that support the proper forwarding of
client attributes to the destination workloads.
This becomes ever more vital as Istio multicluster deployment models become more common.
For more information on `X-Forwarded-For`, see the IETF's [RFC](https://tools.ietf.org/html/rfc7239).
## Configuring network topologies
Configuration of XFF and XFCC headers can be set globally for all gateway workloads via `MeshConfig` or per gateway using
a pod annotation. For example, to configure globally during install or upgrade when using an `IstioOperator` custom resource:
```yaml
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        numTrustedProxies: <VALUE>
        forwardClientCertDetails: <ENUM_VALUE>
```
You can also configure both of these settings by adding the `proxy.istio.io/config` annotation to the Pod spec
of your Istio ingress gateway.
```yaml
...
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "numTrustedProxies": <VALUE>, "forwardClientCertDetails": <ENUM_VALUE> } }'
```
### Configuring X-Forwarded-For Headers
Applications rely on reverse proxies to forward client attributes in a request, such as the `X-Forwarded-For` header. However, due to the variety of network
topologies that Istio can be deployed in, you must set the `numTrustedProxies` to the number of trusted proxies deployed in front
of the Istio gateway proxy, so that the client address can be extracted correctly.
This controls the value populated by the ingress gateway in the `X-Envoy-External-Address` header
which can be reliably used by the upstream services to access the client's original IP address.
For example, if you have a cloud-based Load Balancer and a reverse proxy in front of your Istio gateway, set `numTrustedProxies` to `2`.
Note that all proxies in front of the Istio gateway proxy must parse HTTP traffic and append to the `X-Forwarded-For`
header at each hop. If the number of entries in the `X-Forwarded-For` header is less than the number of
trusted hops configured, Envoy falls back to using the immediate downstream address as the trusted
client address. Please refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-for)
to understand how `X-Forwarded-For` headers and trusted client addresses are determined.
#### Example using X-Forwarded-For capability with httpbin
1. Run the following command to create a file named `topology.yaml` with `numTrustedProxies` set to `2` and install Istio:
```console
$ cat <<EOF > topology.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        numTrustedProxies: 2
EOF
$ istioctl install -f topology.yaml
```
If you previously installed an Istio ingress gateway, restart all ingress gateway pods after step 1.
1. Create an `httpbin` namespace:
```console
$ kubectl create namespace httpbin
namespace/httpbin created
```
1. Set the `istio-injection` label to `enabled` for sidecar injection:
```console
$ kubectl label --overwrite namespace httpbin istio-injection=enabled
namespace/httpbin labeled
```
1. Deploy `httpbin` in the `httpbin` namespace:
```console
$ kubectl apply -n httpbin -f @samples/httpbin/httpbin.yaml@
```
1. Deploy a gateway associated with `httpbin`:
```console
$ kubectl apply -n httpbin -f @samples/httpbin/httpbin-gateway.yaml@
```

Or, if you are using the Kubernetes Gateway API:

```console
$ kubectl apply -n httpbin -f @samples/httpbin/gateway-api/httpbin-gateway.yaml@
$ kubectl wait --for=condition=programmed gtw -n httpbin httpbin-gateway
```
1. Set a local `GATEWAY_URL` environment variable based on your Istio ingress gateway's IP address:

```console
$ export GATEWAY_URL=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```

Or, if you are using the Kubernetes Gateway API:

```console
$ export GATEWAY_URL=$(kubectl get gateways.gateway.networking.k8s.io httpbin-gateway -n httpbin -ojsonpath='{.status.addresses[0].value}')
```
1. Run the following `curl` command to simulate a request with proxy addresses in the `X-Forwarded-For` header:

```console
$ curl -s -H 'X-Forwarded-For: 56.5.6.7, 72.9.5.6, 98.1.2.3' "$GATEWAY_URL/get?show_env=true" | jq '.headers["X-Forwarded-For"][0]'
"56.5.6.7, 72.9.5.6, 98.1.2.3,10.244.0.1"
```
In the above example `$GATEWAY_URL` resolved to 10.244.0.1. This will not be the case in your environment.
The above output shows the request headers that the `httpbin` workload received. When the Istio gateway received this
request, it set the `X-Envoy-External-Address` header to the second to last (`numTrustedProxies: 2`) address in the
`X-Forwarded-For` header from your curl command. Additionally, the gateway appends its own IP to the `X-Forwarded-For`
header before forwarding it to the httpbin workload.
### Configuring X-Forwarded-Client-Cert Headers
From [Envoy's documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-client-cert)
regarding XFCC:
> x-forwarded-client-cert (XFCC) is a proxy header which indicates certificate information of part or all of the clients
> or proxies that a request has flowed through, on its way from the client to the server. A proxy may choose to
> sanitize/append/forward the XFCC header before proxying the request.
To configure how XFCC headers are handled, set `forwardClientCertDetails` in your `IstioOperator`:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        forwardClientCertDetails: <ENUM_VALUE>
```
where `ENUM_VALUE` can be of the following type.
| `ENUM_VALUE`          | Behavior                                                                                                                                                                  |
|-----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `UNDEFINED` | Field is not set. |
| `SANITIZE` | Do not send the XFCC header to the next hop. |
| `FORWARD_ONLY` | When the client connection is mTLS (Mutual TLS), forward the XFCC header in the request. |
| `APPEND_FORWARD` | When the client connection is mTLS, append the client certificate information to the request’s XFCC header and forward it. |
| `SANITIZE_SET` | When the client connection is mTLS, reset the XFCC header with the client certificate information and send it to the next hop. This is the default value for a gateway. |
| `ALWAYS_FORWARD_ONLY` | Always forward the XFCC header in the request, regardless of whether the client connection is mTLS. |
See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-client-cert)
for examples of using this capability.
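For example, to change the XFCC handling for a single gateway deployment rather than mesh-wide, you could reuse the pod annotation shown earlier. The `FORWARD_ONLY` value below is chosen purely for illustration:

```yaml
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "forwardClientCertDetails": "FORWARD_ONLY" } }'
```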
## PROXY Protocol
The [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) allows for exchanging and preservation of client attributes between TCP proxies,
without relying on L7 protocols such as HTTP and the `X-Forwarded-For` and `X-Envoy-External-Address` headers. It is intended for scenarios where an external TCP load balancer needs to proxy TCP traffic through an Istio gateway to a backend TCP service and still expose client attributes such as source IP to upstream TCP service endpoints. PROXY protocol can be enabled via `EnvoyFilter`.
PROXY protocol is only supported for TCP traffic forwarding by Envoy. See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/other_features/ip_transparency#proxy-protocol) for more details, along with some important performance caveats.
PROXY protocol should not be used for L7 traffic, or for Istio gateways behind L7 load balancers.
If your external TCP load balancer is configured to forward TCP traffic and use the PROXY protocol, the Istio Gateway TCP listener must also be configured to accept the PROXY protocol.
To enable PROXY protocol on all TCP listeners on the gateways, set `proxyProtocol` in your `IstioOperator`. For example:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        proxyProtocol: {}
```
Alternatively, deploy a gateway with the following pod annotation:
```yaml
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "proxyProtocol": {} }}'
```
The client IP is retrieved from the PROXY protocol by the gateway and set (or appended) in the `X-Forwarded-For` and `X-Envoy-External-Address` headers. Note that the PROXY protocol is mutually exclusive with L7 headers like `X-Forwarded-For` and `X-Envoy-External-Address`. When PROXY protocol is used in conjunction with the `gatewayTopology` configuration, the `numTrustedProxies` and the received `X-Forwarded-For` header take precedence in determining the trusted client addresses, and the PROXY protocol client information will be ignored.
Note that the above example only configures the Gateway to accept incoming PROXY protocol TCP traffic. See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/other_features/ip_transparency#proxy-protocol) for examples of how to configure Envoy itself to communicate with upstream services using PROXY protocol.
---
title: DNS Proxying
description: How to configure DNS proxying.
weight: 60
keywords: [traffic-management,dns,virtual-machine]
owner: istio/wg-networking-maintainers
test: yes
---
In addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh.
When proxying DNS, all DNS requests from an application will be redirected to the sidecar, which stores a local mapping of domain names to IP addresses. If the request can be handled by the sidecar, it will directly return a response to the application, avoiding a roundtrip to the upstream DNS server. Otherwise, the request is forwarded upstream following the standard `/etc/resolv.conf` DNS configuration.
While Kubernetes provides DNS resolution for Kubernetes `Service`s out of the box, any custom `ServiceEntry`s will not be recognized. With this feature, `ServiceEntry` addresses can be resolved without requiring custom configuration of a DNS server. For Kubernetes `Service`s, the DNS response will be the same, but with reduced load on `kube-dns` and increased performance.
This functionality is also available for services running outside of Kubernetes. This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster.
## Getting started
This feature is not currently enabled by default. To enable it, install Istio with the following settings:
```console
$ cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
EOF
```
This can also be enabled on a per-pod basis with the [`proxy.istio.io/config` annotation](/docs/reference/config/annotations/):
```yaml
kind: Deployment
metadata:
  name: curl
spec:
...
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
...
```
When deploying to a VM using [`istioctl workload entry configure`](/docs/setup/install/virtual-machine/), basic DNS proxying will be enabled by default.
## DNS capture in action
To try out the DNS capture, first set up a `ServiceEntry` for some external service:
```console
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-address
spec:
  addresses:
  - 198.51.100.1
  hosts:
  - address.internal
  ports:
  - name: http
    number: 80
    protocol: HTTP
EOF
```
Bring up a client application to initiate the DNS request:
```console
$ kubectl label namespace default istio-injection=enabled --overwrite
$ kubectl apply -f @samples/curl/curl.yaml@
```
Without the DNS capture, a request to `address.internal` would likely fail to resolve. Once this is enabled, you should instead get a response back based on the configured `address`:
```console
$ kubectl exec deploy/curl -- curl -sS -v address.internal
* Trying 198.51.100.1:80...
```
## Address auto allocation
In the above example, you had a predefined IP address for the service to which you sent the request. However, it's common to access external services that do not have stable addresses, and instead rely on DNS. In this case, the DNS proxy will not have enough information to return a response, and will need to forward DNS requests upstream.
This is especially problematic with TCP traffic. Unlike HTTP requests, which are routed based on `Host` headers, TCP carries much less information; you can only route on the destination IP and port number. Because you don't have a stable IP for the backend, you cannot route based on that either, leaving only port number, which leads to conflicts when multiple `ServiceEntry`s for TCP services share the same port. Refer
to [the following section](#external-tcp-services-without-vips) for more details.
To work around these issues, the DNS proxy additionally supports automatically allocating addresses for `ServiceEntry`s that do not explicitly define one. The DNS response will include a distinct and automatically assigned address for each `ServiceEntry`. The proxy is then configured to match requests to this IP address, and forward the request to the corresponding `ServiceEntry`. Istio will automatically allocate non-routable VIPs (from the Class E subnet) to such services as long as they do not use a wildcard host. The Istio agent on the sidecar will use the VIPs as responses to the DNS lookup queries from the application. Envoy can now clearly distinguish traffic bound for each external TCP service and forward it to the right target.
Because this feature modifies DNS responses, it may not be compatible with all applications.
To try this out, configure another `ServiceEntry`:
```console
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-auto
spec:
  hosts:
  - auto.internal
  ports:
  - name: http
    number: 80
    protocol: HTTP
  resolution: DNS
EOF
```
Now, send a request:
```console
$ kubectl exec deploy/curl -- curl -sS -v auto.internal
* Trying 240.240.0.1:80...
```
As you can see, the request is sent to an automatically allocated address, `240.240.0.1`. These addresses will be picked from the `240.240.0.0/16` reserved IP address range to avoid conflicting with real services.
Users also have the flexibility for more granular configuration by adding the label `networking.istio.io/enable-autoallocate-ip="true/false"` to their `ServiceEntry`. This label configures whether a `ServiceEntry` without any `spec.addresses` set should get an IP address automatically allocated for it.
To try this out, update the existing `ServiceEntry` with the opt-out label:
```console
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-auto
  labels:
    networking.istio.io/enable-autoallocate-ip: "false"
spec:
  hosts:
  - auto.internal
  ports:
  - name: http
    number: 80
    protocol: HTTP
  resolution: DNS
EOF
```
Now, send a request and verify that the auto allocation is no longer happening:
```console
$ kubectl exec deploy/curl -- curl -sS -v auto.internal
* Could not resolve host: auto.internal
* shutting down connection #0
```
## External TCP services without VIPs
By default, Istio has a limitation when routing external TCP traffic because it is unable to distinguish between multiple TCP services on the same port. This limitation is particularly apparent when using third party databases such as AWS Relational Database Service or any database setup with geographical redundancy. Similar, but different, external TCP services cannot be handled separately by default. For the sidecar to distinguish traffic between two different TCP services that are outside of the mesh, the services must be on different ports or they need to have globally unique VIPs.
For example, if you have two external database services, `mysql-instance1` and `mysql-instance2`, and you create service entries for both, client sidecars will still have a single listener on `0.0.0.0:{port}` that looks up the IP address of only `mysql-instance1`, from public DNS servers, and forwards traffic to it. It cannot route traffic to `mysql-instance2` because it has no way of distinguishing whether traffic arriving at `0.0.0.0:{port}` is bound for `mysql-instance1` or `mysql-instance2`.
The following example shows how DNS proxying can be used to solve this problem.
A virtual IP address will be assigned to every service entry so that client sidecars can clearly distinguish traffic bound for each external TCP service.
1. Update the Istio configuration specified in the [Getting Started](#getting-started) section to also configure `discoverySelectors` that restrict the mesh to namespaces with `istio-injection` enabled. This will let us use any other namespaces in the cluster to run TCP services outside of the mesh.
```console
$ cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
    # discoverySelectors configuration below is just used for simulating the external service TCP scenario,
    # so that we do not have to use an external site for testing.
    discoverySelectors:
    - matchLabels:
        istio-injection: enabled
EOF
```
1. Deploy the first external sample TCP application:
```console
$ kubectl create ns external-1
$ kubectl -n external-1 apply -f samples/tcp-echo/tcp-echo.yaml
```
1. Deploy the second external sample TCP application:
```console
$ kubectl create ns external-2
$ kubectl -n external-2 apply -f samples/tcp-echo/tcp-echo.yaml
```
1. Configure `ServiceEntry` to reach external services:
```console
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-svc-1
spec:
  hosts:
  - tcp-echo.external-1.svc.cluster.local
  ports:
  - name: external-svc-1
    number: 9000
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-svc-2
spec:
  hosts:
  - tcp-echo.external-2.svc.cluster.local
  ports:
  - name: external-svc-2
    number: 9000
    protocol: TCP
  resolution: DNS
EOF
```
1. Verify listeners are configured separately for each service at the client side:
```console
$ istioctl pc listener deploy/curl | grep tcp-echo | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
ADDRESS=240.240.105.94, DESTINATION=Cluster: outbound|9000||tcp-echo.external-2.svc.cluster.local
ADDRESS=240.240.69.138, DESTINATION=Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local
```
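As an optional extra check, you can confirm that a separate upstream cluster exists for each `ServiceEntry`. This is only a sketch, assuming the same `curl` deployment as above:

```console
$ istioctl pc cluster deploy/curl --fqdn tcp-echo.external-1.svc.cluster.local
$ istioctl pc cluster deploy/curl --fqdn tcp-echo.external-2.svc.cluster.local
```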
## Cleanup
```console
$ kubectl -n external-1 delete -f @samples/tcp-echo/tcp-echo.yaml@
$ kubectl -n external-2 delete -f @samples/tcp-echo/tcp-echo.yaml@
$ kubectl delete -f @samples/curl/curl.yaml@
$ istioctl uninstall --purge -y
$ kubectl delete ns istio-system external-1 external-2
$ kubectl label namespace default istio-injection-
```
| istio | title DNS Proxying description How to configure DNS proxying weight 60 keywords traffic management dns virtual machine owner istio wg networking maintainers test yes In addition to capturing application traffic Istio can also capture DNS requests to improve the performance and usability of your mesh When proxying DNS all DNS requests from an application will be redirected to the sidecar which stores a local mapping of domain names to IP addresses If the request can be handled by the sidecar it will directly return a response to the application avoiding a roundtrip to the upstream DNS server Otherwise the request is forwarded upstream following the standard etc resolv conf DNS configuration While Kubernetes provides DNS resolution for Kubernetes Service s out of the box any custom ServiceEntry s will not be recognized With this feature ServiceEntry addresses can be resolved without requiring custom configuration of a DNS server For Kubernetes Service s the DNS response will be the same but with reduced load on kube dns and increased performance This functionality is also available for services running outside of Kubernetes This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster Getting started This feature is not currently enabled by default To enable it install Istio with the following settings cat EOF istioctl install y f apiVersion install istio io v1alpha1 kind IstioOperator spec meshConfig defaultConfig proxyMetadata Enable basic DNS proxying ISTIO META DNS CAPTURE true EOF This can also be enabled on a per pod basis with the proxy istio io config annotation docs reference config annotations kind Deployment metadata name curl spec template metadata annotations proxy istio io config proxyMetadata ISTIO META DNS CAPTURE true When deploying to a VM using istioctl workload entry configure docs setup install virtual machine basic DNS proxying will be enabled by default DNS capture In action To try out the DNS capture first setup a ServiceEntry for some external service kubectl apply f EOF apiVersion networking istio io v1 kind ServiceEntry metadata name external address spec addresses 198 51 100 1 hosts address internal ports name http number 80 protocol HTTP EOF Bring up a client application to initiate the DNS request kubectl label namespace default istio injection enabled overwrite kubectl apply f samples curl curl yaml Without the DNS capture a request to address internal would likely fail to resolve Once this is enabled you should instead get a response back based on the configured address kubectl exec deploy curl curl sS v address internal Trying 198 51 100 1 80 Address auto allocation In the above example you had a predefined IP address for the service to which you sent the request However it s common to access external services that do not have stable addresses and instead rely on DNS In this case the DNS proxy will not have enough information to return a response and will need to forward DNS requests upstream This is especially problematic with TCP traffic Unlike HTTP requests which are routed based on Host headers TCP carries much less information you can only route on the destination IP and port number Because you don t have a stable IP for the backend you cannot route based on that either leaving only port number which leads to conflicts when multiple ServiceEntry s for TCP services share the same port Refer to the following section external tcp services without vips for more details To 
---
title: Multi-cluster Traffic Management
description: How to configure how traffic is distributed among clusters in the mesh.
weight: 70
keywords: [traffic-management,multicluster]
owner: istio/wg-networking-maintainers
test: no
---
Within a multicluster mesh, traffic rules specific to the cluster topology may be desirable. This document describes
a few ways to manage traffic in a multicluster mesh. Before reading this guide:
1. Read [Deployment Models](/docs/ops/deployment/deployment-models/#multiple-clusters)
1. Make sure your deployed services follow the concept of namespace sameness.
## Keeping traffic in-cluster
In some cases the default cross-cluster load balancing behavior is not desirable. To keep traffic "cluster-local" (i.e.
traffic sent from `cluster-a` will only reach destinations in `cluster-a`), mark hostnames or wildcards as `clusterLocal`
using [`MeshConfig.serviceSettings`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ServiceSettings-Settings).
For example, you can enforce cluster-local traffic for an individual service, all services in a particular namespace, or globally for all services in the mesh, as follows:
For an individual service:

```yaml
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "mysvc.myns.svc.cluster.local"
```

For all services in a particular namespace:

```yaml
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "*.myns.svc.cluster.local"
```

For all services in the mesh:

```yaml
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "*"
```
## Partitioning Services {#partitioning-services}
[`DestinationRule.subsets`](/docs/reference/config/networking/destination-rule/#Subset) allows partitioning a service
by selecting labels. These labels can be the labels from Kubernetes metadata, or from [built-in labels](/docs/reference/config/labels/).
One of these built-in labels, `topology.istio.io/cluster`, in the subset selector for a `DestinationRule` allows
creating per-cluster subsets.
```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: mysvc-per-cluster-dr
spec:
  host: mysvc.myns.svc.cluster.local
  subsets:
  - name: cluster-1
    labels:
      topology.istio.io/cluster: cluster-1
  - name: cluster-2
    labels:
      topology.istio.io/cluster: cluster-2
```
Using these subsets you can create various routing rules based on the cluster such as [mirroring](/docs/tasks/traffic-management/mirroring/)
or [shifting](/docs/tasks/traffic-management/traffic-shifting/).
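For example, a minimal sketch that shifts a fixed share of traffic from one cluster to the other (the 80/20 split here is arbitrary) might look like:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: mysvc-cluster-shift-vs
spec:
  hosts:
  - mysvc.myns.svc.cluster.local
  http:
  - route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-1
      weight: 80
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-2
      weight: 20
```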
This provides another option to create cluster-local traffic rules by restricting the destination subset in a `VirtualService`:
```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: mysvc-cluster-local-vs
spec:
  hosts:
  - mysvc.myns.svc.cluster.local
  http:
  - name: "cluster-1-local"
    match:
    - sourceLabels:
        topology.istio.io/cluster: "cluster-1"
    route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-1
  - name: "cluster-2-local"
    match:
    - sourceLabels:
        topology.istio.io/cluster: "cluster-2"
    route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-2
```
Using subset-based routing this way to control cluster-local traffic, as opposed to
[`MeshConfig.serviceSettings`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ServiceSettings-Settings),
has the downside of mixing service-level policy with topology-level policy.
For example, a rule that sends 10% of traffic to `v2` of a service will need twice the
number of subsets (e.g., `cluster-1-v2`, `cluster-2-v2`).
This approach is best limited to situations where more granular control of cluster-based routing is needed. | istio | title Multi cluster Traffic Management description How to configure how traffic is distributed among clusters in the mesh weight 70 keywords traffic management multicluster owner istio wg networking maintainers test no Within a multicluster mesh traffic rules specific to the cluster topology may be desirable This document describes a few ways to manage traffic in a multicluster mesh Before reading this guide 1 Read Deployment Models docs ops deployment deployment models multiple clusters 1 Make sure your deployed services follow the concept of namespace sameness Keeping traffic in cluster In some cases the default cross cluster load balancing behavior is not desirable To keep traffic cluster local i e traffic sent from cluster a will only reach destinations in cluster a mark hostnames or wildcards as clusterLocal using MeshConfig serviceSettings docs reference config istio mesh v1alpha1 MeshConfig ServiceSettings Settings For example you can enforce cluster local traffic for an individual service all services in a particular namespace or globally for all services in the mesh as follows serviceSettings settings clusterLocal true hosts mysvc myns svc cluster local serviceSettings settings clusterLocal true hosts myns svc cluster local serviceSettings settings clusterLocal true hosts Partitioning Services partitioning services DestinationRule subsets docs reference config networking destination rule Subset allows partitioning a service by selecting labels These labels can be the labels from Kubernetes metadata or from built in labels docs reference config labels One of these built in labels topology istio io cluster in the subset selector for a DestinationRule allows creating per cluster subsets apiVersion networking istio io v1 kind DestinationRule metadata name mysvc per cluster dr spec host mysvc myns svc cluster local subsets name cluster 1 labels topology istio io cluster cluster 1 name cluster 2 labels topology istio io cluster cluster 2 Using these subsets you can create various routing rules based on the cluster such as mirroring docs tasks traffic management mirroring or shifting docs tasks traffic management traffic shifting This provides another option to create cluster local traffic rules by restricting the destination subset in a VirtualService apiVersion networking istio io v1 kind VirtualService metadata name mysvc cluster local vs spec hosts mysvc myns svc cluster local http name cluster 1 local match sourceLabels topology istio io cluster cluster 1 route destination host mysvc myns svc cluster local subset cluster 1 name cluster 2 local match sourceLabels topology istio io cluster cluster 2 route destination host mysvc myns svc cluster local subset cluster 2 Using subset based routing this way to control cluster local traffic as opposed to MeshConfig serviceSettings docs reference config istio mesh v1alpha1 MeshConfig ServiceSettings Settings has the downside of mixing service level policy with topology level policy For example a rule that sends 10 of traffic to v2 of a service will need twice the number of subsets e g cluster 1 v2 cluster 2 v2 This approach is best limited to situations where more granular control of cluster based routing is needed |
---
title: Understanding Traffic Routing
linktitle: Traffic Routing
description: How Istio routes traffic through the mesh.
weight: 30
keywords: [traffic-management,proxy]
owner: istio/wg-networking-maintainers
test: n/a
---
One of the goals of Istio is to act as a "transparent proxy" which can be dropped into an existing cluster, allowing traffic to continue to flow as before.
However, there are powerful ways Istio can manage traffic differently than a typical Kubernetes cluster because of the additional features such as request load balancing.
To understand what is happening in your mesh, it is important to understand how Istio routes traffic.
This document describes low level implementation details. For a higher level overview, check out the traffic management [Concepts](/docs/concepts/traffic-management/) or [Tasks](/docs/tasks/traffic-management/).
## Frontends and backends
In traffic routing in Istio, there are two primary phases:
* The "frontend" refers to how we match the type of traffic we are handling.
This is necessary to identify which backend to route traffic to, and which policies to apply.
For example, we may read the `Host` header of `http.ns.svc.cluster.local` and identify the request is intended for the `http` Service.
More information on how this matching works can be found below.
* The "backend" refers to where we send traffic once we have matched it.
Using the example above, after identifying the request as targeting the `http` Service, we would send it to an endpoint in that Service.
However, this selection is not always so simple; Istio allows customization of this logic, through `VirtualService` routing rules.
Standard Kubernetes networking has these same concepts, too, but they are much simpler and generally hidden.
When a `Service` is created, there is typically an associated frontend -- the automatically created DNS name (such as `http.ns.svc.cluster.local`),
and an automatically created IP address to represent the service (the `ClusterIP`).
Similarly, a backend is also created - the `Endpoints` or `EndpointSlice` - which represents all of the pods selected by the service.
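For reference, a minimal Service of this shape might look like the following sketch (the name, namespace, and ports are illustrative):

```yaml
# Kubernetes creates the DNS name http.ns.svc.cluster.local and a ClusterIP (the "frontend"),
# plus Endpoints/EndpointSlices for the selected pods (the "backend").
apiVersion: v1
kind: Service
metadata:
  name: http
  namespace: ns
spec:
  selector:
    app: http
  ports:
  - name: http
    port: 80
    targetPort: 8080
```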
## Protocols
Unlike Kubernetes, Istio has the ability to process application level protocols such as HTTP and TLS.
This allows for different types of [frontend](#frontends-and-backends) matching than is available in Kubernetes.
In general, there are three classes of protocols Istio understands:
* HTTP, which includes HTTP/1.1, HTTP/2, and gRPC. Note that this does not include TLS encrypted traffic (HTTPS).
* TLS, which includes HTTPS.
* Raw TCP bytes.
The [protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/) document describes how Istio decides which protocol is used.
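As a rough illustration (the Service name, port names, and numbers are hypothetical), one way to make the protocol explicit rather than relying on automatic detection is to name the Service ports with a protocol prefix, or to set `appProtocol`; see the protocol selection document above for the authoritative rules:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  selector:
    app: mysvc
  ports:
  - name: http-web       # the "http-" prefix declares HTTP explicitly
    port: 80
  - name: tcp-db         # the "tcp-" prefix declares raw TCP
    port: 5432
```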
The use of "TCP" can be confusing, as in other contexts it is used to distinguish between other L4 protocols, such as UDP.
When referring to the TCP protocol in Istio, this typically means we are treating it as a raw stream of bytes,
and not parsing application level protocols such as TLS or HTTP.
## Traffic Routing
When an Envoy proxy receives a request, it must decide where, if anywhere, to forward it.
By default, this will be to the original service that was requested, unless [customized](/docs/tasks/traffic-management/traffic-shifting/).
How this works depends on the protocol used.
### TCP
When processing TCP traffic, Istio has a very small amount of useful information to route the connection - only the destination IP and Port.
These attributes are used to determine the intended Service; the proxy is configured to listen on each service IP (`<Kubernetes ClusterIP>:<Port>`) pair and forward traffic to the upstream service.
For customizations, a TCP `VirtualService` can be configured, which allows [matching on specific IPs and ports](/docs/reference/config/networking/virtual-service/#L4MatchAttributes) and routing it to different upstream services than requested.
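As a sketch of such a customization (the hosts, IP, and ports are hypothetical), a TCP `VirtualService` could match on the original destination IP and port and send the connection to a different upstream service:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: tcp-echo-route
spec:
  hosts:
  - tcp-echo.default.svc.cluster.local
  tcp:
  - match:
    - port: 9000
      destinationSubnets:
      - 10.96.0.50/32              # original destination IP to match
    route:
    - destination:
        host: tcp-echo-v2.default.svc.cluster.local
        port:
          number: 9000
```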
### TLS
When processing TLS traffic, Istio has slightly more information available than raw TCP: we can inspect the [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) field presented during the TLS handshake.
For standard Services, the same IP:Port matching is used as for raw TCP.
However, for services that do not have a Service IP defined, such as [ExternalName services](#externalname-services), the SNI field will be used for routing.
Additionally, custom routing can be configured with a TLS `VirtualService` to [match on SNI](/docs/reference/config/networking/virtual-service/#TLSMatchAttributes) and route requests to custom destinations.
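A minimal sketch of SNI-based routing might look like this (the hostnames and destination service are hypothetical):

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: login-tls-route
spec:
  hosts:
  - login.example.com
  tls:
  - match:
    - port: 443
      sniHosts:
      - login.example.com
    route:
    - destination:
        host: login-frontend.prod.svc.cluster.local
        port:
          number: 443
```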
### HTTP
HTTP allows much richer routing than TCP and TLS. With HTTP, you can route individual HTTP requests, rather than just connections.
In addition, a [number of rich attributes](/docs/reference/config/networking/virtual-service/#HTTPMatchRequest) are available, such as host, path, headers, query parameters, etc.
While TCP and TLS traffic generally behave the same with or without Istio (assuming no configuration has been applied to customize the routing), HTTP has significant differences.
* Istio will load balance individual requests. In general, this is highly desirable, especially in scenarios with long-lived connections such as gRPC and HTTP/2, where connection level load balancing is ineffective.
* Requests are routed based on the port and *`Host` header*, rather than port and IP. This means the destination IP address is effectively ignored. For example, `curl 8.8.8.8 -H "Host: productpage.default.svc.cluster.local"` would be routed to the `productpage` Service.
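To illustrate, here is a sketch of an HTTP-level rule using some of these attributes; the match values are hypothetical, and the `v2` subset assumes a corresponding `DestinationRule` exists:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /api
      headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v2    # assumes a DestinationRule defining subset v2
  - route:
    - destination:
        host: reviews.default.svc.cluster.local
```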
## Unmatched traffic
If traffic cannot be matched using one of the methods described above, it is treated as [passthrough traffic](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services).
By default, these requests will be forwarded as-is, which ensures that traffic to services that Istio is not aware of (such as external services that do not have `ServiceEntry`s created) continues to function.
Note that when these requests are forwarded, mutual TLS will not be used and telemetry collection is limited.
## Service types
Along with standard `ClusterIP` Services, Istio supports the full range of Kubernetes Services, with some caveats.
### `LoadBalancer` and `NodePort` Services
These Services are supersets of `ClusterIP` Services, and are mostly concerned with allowing access from external clients.
These service types are supported and behave exactly like standard `ClusterIP` Services.
### Headless Services
A [headless Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) is a Service that does not have a `ClusterIP` assigned.
Instead, the DNS response will contain the IP addresses of each endpoint (i.e. the Pod IP) that is a part of the Service.
In general, Istio does not configure listeners for each Pod IP, as it works at the Service level.
However, to support headless services, listeners are set up for each IP:Port pair in the headless service.
An exception to this is for protocols declared as HTTP, which will match traffic by the `Host` header.
Without Istio, the `ports` field of a headless service is not strictly required because requests go directly to pod IPs, which can accept traffic on all ports.
However, with Istio the port must be declared in the Service, or it will [not be matched](/docs/ops/configuration/traffic-management/traffic-routing/#unmatched-traffic).
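For example, a headless Service intended for use with Istio might declare its port explicitly, as in this sketch (names and port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None        # headless: no ClusterIP is allocated
  selector:
    app: headless-app
  ports:
  - name: http           # declaring the port lets Istio match the traffic
    port: 8080
```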
### ExternalName Services
An [ExternalName Service](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) is essentially just a DNS alias.
To make things more concrete, consider the following example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: alias
spec:
  type: ExternalName
  externalName: concrete.example.com
```
Because there is neither a `ClusterIP` nor pod IPs to match on, for TCP traffic there are no changes at all to traffic matching in Istio.
When Istio receives a request, it will see the IP for `concrete.example.com`.
If this is a service Istio knows about, it will be routed as described [above](#tcp).
If not, it will be handled as [unmatched traffic](#unmatched-traffic).
For HTTP and TLS, which match on hostname, things are a bit different.
If the target service (`concrete.example.com`) is a service Istio knows about, then the alias hostname (`alias.default.svc.cluster.local`) will be added
as an _additional_ match to the [TLS](#tls) or [HTTP](#http) matching.
If not, there will be no changes, so it will be handled as [unmatched traffic](#unmatched-traffic).
An `ExternalName` service can never be a [backend](#frontends-and-backends) on its own.
Instead, it is only ever used as additional [frontend](#frontends-and-backends) matches to existing Services.
If one is explicitly used as a backend, such as in a `VirtualService` destination, the same aliasing applies.
That is, if `alias.default.svc.cluster.local` is set as the destination, then requests will go to `concrete.example.com`.
If that hostname is not known to Istio, the requests will fail; in this case, a `ServiceEntry` for `concrete.example.com` would make this configuration work.
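A minimal sketch of such a `ServiceEntry` might look like the following (the port and protocol are assumptions about the external service):

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: concrete-example
spec:
  hosts:
  - concrete.example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS
```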
### ServiceEntry
In addition to Kubernetes Services, [Service Entries](/docs/reference/config/networking/service-entry/#ServiceEntry) can be created to extend the set of services known to Istio.
This can be useful to ensure that traffic to external services, such as `example.com`, gets the functionality of Istio.
A ServiceEntry with `addresses` set will perform routing just like a `ClusterIP` Service.
However, for Service Entries without any `addresses`, all IPs on the port will be matched.
This may prevent [unmatched traffic](#unmatched-traffic) on the same port from being forwarded correctly.
As such, it is best to avoid these where possible, or use dedicated ports when needed.
HTTP and TLS do not share this constraint, as routing is done based on the hostname/SNI.
The `addresses` field and the `endpoints` field are often confused.
`addresses` refers to the IPs that will be matched against, while `endpoints` refers to the set of IPs that traffic will be sent to.
For example, the Service entry below would match traffic for `1.1.1.1`, and send the request to `2.2.2.2` and `3.3.3.3` following the configured load balancing policy:
```yaml
addresses: [1.1.1.1]
resolution: STATIC
endpoints:
- address: 2.2.2.2
- address: 3.3.3.3
```
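For context, a fuller sketch of a `ServiceEntry` using both fields might look like this (the hostname, port, and protocol are illustrative):

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: vip-example
spec:
  hosts:
  - vip.example.internal      # hypothetical hostname
  addresses:
  - 1.1.1.1                   # IP the proxy matches traffic against
  ports:
  - number: 2000
    name: tcp
    protocol: TCP
  resolution: STATIC
  endpoints:                  # IPs the traffic is actually sent to
  - address: 2.2.2.2
  - address: 3.3.3.3
```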
---
title: Managing In-Mesh Certificates
linktitle: Managing In-Mesh Certificates
description: How to configure certificates within your mesh.
weight: 30
keywords: [traffic-management,proxy]
owner: istio/wg-networking-maintainers,istio/wg-environments-maintainers
test: n/a
---
Many users need to manage the types of certificates used within their environment. For example,
some users require the use of Elliptic Curve Cryptography (ECC), while others may need a
larger key size for their RSA certificates. Configuring certificates within your environment can be
a daunting task.
This document only covers certificates used for in-mesh communication. For managing certificates at
your Gateway, see the [Secure Gateways](/docs/tasks/traffic-management/ingress/secure-ingress/) document.
For managing the CA used by istiod to generate workload certificates, see
the [Plugin CA Certificates](/docs/tasks/security/cert-management/plugin-ca-cert/) document.
## istiod
When Istio is installed without a root CA certificate, istiod will generate a self-signed
CA certificate using RSA 2048.
To change the self-signed CA certificate's bit length, you will need to modify either the IstioOperator manifest provided to
`istioctl` or the values file used during the Helm installation of the [istio-discovery](/manifests/charts/istio-control/istio-discovery) chart.
While there are many environment variables that can be changed for
[pilot-discovery](/docs/reference/commands/pilot-discovery/), this document will only
outline some of them.
When using an IstioOperator manifest:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
```

Or, when installing with Helm, in the values file for the `istio-discovery` chart:

```yaml
pilot:
  env:
    CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
```
## Sidecars
Since sidecars manage their own certificates for in-mesh communication, they are responsible
for managing their private keys and generated Certificate Signing Requests (CSRs). The sidecar
injector needs to be modified to inject the environment variables used for this purpose.
While there are many environment variables that can be changed for
[pilot-agent](/docs/reference/commands/pilot-agent/), this document will only
outline some of them.
When using an IstioOperator manifest:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
```

Or, when installing with Helm, in the values file:

```yaml
meshConfig:
  defaultConfig:
    proxyMetadata:
      CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
```

The same setting can also be applied to an individual workload with the `proxy.istio.io/config` annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        ...
        proxy.istio.io/config: |
          CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
    spec:
      ...
```
### Signature Algorithm
By default, the sidecars will create RSA certificates. If you want to change it to
ECC, you need to set `ECC_SIGNATURE_ALGORITHM` to `ECDSA`.
When using an IstioOperator manifest:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ECC_SIGNATURE_ALGORITHM: "ECDSA"
```

Or, in the Helm values file:

```yaml
meshConfig:
  defaultConfig:
    proxyMetadata:
      ECC_SIGNATURE_ALGORITHM: "ECDSA"
```
Only P256 and P384 are supported via `ECC_CURVE`.
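For example, a sketch selecting the P384 curve alongside ECDSA, using the same `proxyMetadata` mechanism shown above:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ECC_SIGNATURE_ALGORITHM: "ECDSA"
        ECC_CURVE: "P384"
```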
If you prefer to retain RSA signature algorithms and want to modify the RSA key size,
you can change the value of `WORKLOAD_RSA_KEY_SIZE`.
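For example, a sketch that raises the workload certificate key size (the value shown is illustrative):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        WORKLOAD_RSA_KEY_SIZE: "3072"
```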
---
title: Diagnose your Configuration with Istioctl Analyze
description: Shows you how to use istioctl analyze to identify potential issues with your configuration.
weight: 40
keywords: [istioctl, debugging, kubernetes]
owner: istio/wg-user-experience-maintainers
test: yes
---
`istioctl analyze` is a diagnostic tool that can detect potential issues with your
Istio configuration. It can run against a live cluster or a set of local configuration files.
It can also run against a combination of the two, allowing you to catch problems before you
apply changes to a cluster.
## Getting started in under a minute
You can analyze your current live Kubernetes cluster by running:
```console
$ istioctl analyze --all-namespaces
```
And that’s it! It’ll give you any recommendations that apply.
For example, if you forgot to enable Istio injection (a very common issue), you would get the following 'Info' message:
```
Info [IST0102] (Namespace default) The namespace is not enabled for Istio injection. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.
```
Fix the issue:
```console
$ kubectl label namespace default istio-injection=enabled
```
Then try again:
```console
$ istioctl analyze --namespace default
✔ No validation issues found when analyzing namespace: default.
```
## Analyzing live clusters, local files, or both
Analyze the current live cluster, simulating the effect of applying additional yaml files
like `bookinfo-gateway.yaml` and `destination-rule-all.yaml` in the `samples/bookinfo/networking` directory:
```console
$ istioctl analyze samples/bookinfo/networking/bookinfo-gateway.yaml samples/bookinfo/networking/destination-rule-all.yaml
Error [IST0101] (Gateway default/bookinfo-gateway samples/bookinfo/networking/bookinfo-gateway.yaml:9) Referenced selector not found: "istio=ingressgateway"
Error [IST0101] (VirtualService default/bookinfo samples/bookinfo/networking/bookinfo-gateway.yaml:41) Referenced host not found: "productpage"
Error: Analyzers found issues when analyzing namespace: default.
See https://istio.io/v/docs/reference/config/analysis for more information about causes and resolutions.
```
Analyze the entire `networking` folder:
```console
$ istioctl analyze samples/bookinfo/networking/
```
Analyze all yaml files in the `networking` folder:
```console
$ istioctl analyze samples/bookinfo/networking/*.yaml
```
The above examples are doing analysis on a live cluster. The tool also supports performing analysis
of a set of local Kubernetes yaml configuration files, or on a combination of local files and a
live cluster. When analyzing a set of local files, the file-set is expected to be fully self-contained.
Typically, this is used to analyze the entire set of configuration files that are intended to be deployed
to a cluster. To use this feature, simply add the `--use-kube=false` flag.
Analyze all yaml files in the `networking` folder:
```console
$ istioctl analyze --use-kube=false samples/bookinfo/networking/*.yaml
```
You can run `istioctl analyze --help` to see the full set of options.
## Advanced
### Enabling validation messages for resource status
Starting with v1.5, Istio can be set up to perform configuration analysis alongside
the configuration distribution that it is primarily responsible for, via the `istiod.enableAnalysis` flag.
This analysis uses the same logic and error messages as when using `istioctl analyze`.
Validation messages from the analysis are written to the status subresource of the affected Istio resource.
For example, if you have a misconfigured gateway on your "ratings" virtual service,
running `kubectl get virtualservice ratings` would give you something like:
```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
...
spec:
  gateways:
  - bogus-gateway
  hosts:
  - ratings
...
status:
  observedGeneration: "1"
  validationMessages:
  - documentationUrl: https://istio.io/v/docs/reference/config/analysis/ist0101/
    level: ERROR
    type:
      code: IST0101
```
`enableAnalysis` runs in the background, and will keep the status field of a resource up to date
with its current validation status. Note that this isn't a replacement for `istioctl analyze`:
- Not all resources have a custom status field (e.g. Kubernetes `namespace` resources),
so messages attached to those resources won't show validation messages.
- `enableAnalysis` only works on Istio versions starting with 1.5, while
`istioctl analyze` can be used with older versions.
- While it makes it easy to see what's wrong with a particular resource,
it's harder to get a holistic view of validation status in the mesh.
You can enable this feature with:
```console
$ istioctl install --set values.global.istiod.enableAnalysis=true
```
### Ignoring specific analyzer messages via CLI
Sometimes you might find it useful to hide or ignore analyzer messages in certain cases.
For example, imagine a situation where a message is emitted about a resource you don't have permissions to update:
```console
$ istioctl analyze -k --namespace frod
Info [IST0102] (Namespace frod) The namespace is not enabled for Istio injection. Run 'kubectl label namespace frod istio-injection=enabled' to enable it, or 'kubectl label namespace frod istio-injection=disabled' to explicitly mark it as not needing injection.
```
Because you don't have permission to update the namespace, you cannot resolve the message
by labeling the namespace as suggested. Instead, you can direct `istioctl analyze` to suppress the above message on the resource:
```console
$ istioctl analyze -k --namespace frod --suppress "IST0102=Namespace frod"
✔ No validation issues found when analyzing namespace: frod.
```
The syntax used for suppression is the same syntax used throughout `istioctl` when referring to
resources: `<kind> <name>.<namespace>`, or just `<kind> <name>` for cluster-scoped resources like
`Namespace`. If you want to suppress multiple objects, you can either repeat the `--suppress` argument or use wildcards:
```console
$ # Suppress code IST0102 on namespace frod and IST0107 on all pods in namespace baz
$ istioctl analyze -k --all-namespaces --suppress "IST0102=Namespace frod" --suppress "IST0107=Pod *.baz"
```
### Ignoring specific analyzer messages via annotations
You can also ignore specific analyzer messages using an annotation on the resource.
For example, to ignore code IST0107 (`MisplacedAnnotation`) on resource `deployment/my-deployment`:
```console
$ kubectl annotate deployment my-deployment galley.istio.io/analyze-suppress=IST0107
```
To ignore multiple codes for a resource, separate each code with a comma:
```console
$ kubectl annotate deployment my-deployment galley.istio.io/analyze-suppress=IST0107,IST0002
```
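If you later want those messages to be reported again, the annotation can be removed with the standard `kubectl` trailing-dash syntax:

```console
$ kubectl annotate deployment my-deployment galley.istio.io/analyze-suppress-
```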
## Helping us improve this tool
We're continuing to add more analysis capability and we'd love your help in identifying more use cases.
If you've discovered some Istio configuration "gotcha", some tricky situation that caused you some
problems, open an issue and let us know. We might be able to automatically flag this problem so that
others can discover and avoid the problem in the first place.
To do this, [open an issue](https://github.com/istio/istio/issues) describing your scenario. For example:
- Look at all the virtual services
- For each, look at their list of gateways
- If some of the gateways don’t exist, produce an error
We already have an analyzer for this specific scenario, so this is just an example to illustrate what
kind of information you should provide.
## Q&A
- **What Istio release does this tool target?**
Like other `istioctl` tools, we generally recommend using a downloaded version
that matches the version deployed in your cluster.
For the time being, analysis is generally backwards compatible, so that you can,
for example, run the latest version of `istioctl analyze` against
a cluster running an older Istio 1.x version and expect to get useful feedback.
Analysis rules that are not meaningful with an older Istio release will be skipped.
If you decide to use the latest `istioctl` for analysis purposes on a cluster
running an older Istio version, we suggest that you keep it in a separate folder
from the version of the binary used to manage your deployed Istio release.
- **What analyzers are supported today?**
We're still working on documenting the analyzers. In the meantime, you can see
all the analyzers in the [Istio source](/pkg/config/analysis/analyzers).
You can also see what [configuration analysis messages](/docs/reference/config/analysis/)
are supported to get an idea of what is currently covered.
- **Can analysis do anything harmful to my cluster?**
Analysis never changes configuration state. It is a completely read-only operation
that will never alter the state of a cluster.
- **What about analysis that goes beyond configuration?**
Today, the analysis is purely based on Kubernetes configuration, but in the future
we’d like to expand beyond that. For example, we could allow analyzers to also look
at logs to generate recommendations.
- **Where can I find out how to fix the errors I'm getting?**
The set of [configuration analysis messages](/docs/reference/config/analysis/)
contains descriptions of each message along with suggested fixes.
---
title: Debugging Envoy and Istiod
description: Describes tools and techniques to diagnose Envoy configuration issues related to traffic management.
weight: 20
keywords: [debug,proxy,status,config,pilot,envoy]
aliases:
- /help/ops/traffic-management/proxy-cmd
- /help/ops/misc
- /help/ops/troubleshooting/proxy-cmd
owner: istio/wg-user-experience-maintainers
test: no
---
Istio provides two very valuable commands to help diagnose traffic management configuration problems,
the [`proxy-status`](/docs/reference/commands/istioctl/#istioctl-proxy-status)
and [`proxy-config`](/docs/reference/commands/istioctl/#istioctl-proxy-config) commands. The `proxy-status` command
allows you to get an overview of your mesh and identify the proxy causing the problem. Then `proxy-config` can be used
to inspect Envoy configuration and diagnose the issue.
If you want to try the commands described below, you can either:
* Have a Kubernetes cluster with Istio and Bookinfo installed (as described in
[installation steps](/docs/setup/getting-started/) and
[Bookinfo installation steps](/docs/examples/bookinfo/#deploying-the-application)).
OR
* Use similar commands against your own application running in a Kubernetes cluster.
## Get an overview of your mesh
The `proxy-status` command allows you to get an overview of your mesh. If you suspect one of your sidecars isn't
receiving configuration or is out of sync then `proxy-status` will tell you this.
```console
$ istioctl proxy-status
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-558b8b4b76-qzqsg.default SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
istio-ingressgateway-66c994c45c-cmb7x.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6cf8d4f9cb-wm7x6 1.7.0
productpage-v1-6987489c74-nc7tj.default SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
prometheus-7bdc59c94d-hcp59.istio-system SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
ratings-v1-7dc98c7588-5m6xj.default SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
reviews-v1-7f99cc4496-rtsqn.default SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
reviews-v2-7d79d5bd5d-tj6kf.default SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
reviews-v3-7dbcdcbc56-t8wrx.default SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6 1.7.0
```

If a proxy is missing from this list, it means that it is not currently connected to an Istiod instance, so it will not be
receiving any configuration.
* `SYNCED` means that Envoy has acknowledged the last configuration Istiod has sent to it.
* `NOT SENT` means that Istiod hasn't sent anything to Envoy. This usually is because Istiod has nothing to send.
* `STALE` means that Istiod has sent an update to Envoy but has not received an acknowledgement. This usually indicates
a networking issue between Envoy and Istiod or a bug with Istio itself.
## Retrieve diffs between Envoy and Istiod
The `proxy-status` command can also be used to retrieve a diff between the configuration Envoy has loaded and the
configuration Istiod would send, by providing a proxy ID. This can help you determine exactly what is out of sync and
where the issue may lie.
```console
$ istioctl proxy-status details-v1-6dcc6fbb9d-wsjz4.default
--- Istiod Clusters
+++ Envoy Clusters
@@ -374,36 +374,14 @@
"edsClusterConfig": {
"edsConfig": {
"ads": {
}
},
"serviceName": "outbound|443||public-cr0bdc785ce3f14722918080a97e1f26be-alb1.kube-system.svc.cluster.local"
- },
- "connectTimeout": "1.000s",
- "circuitBreakers": {
- "thresholds": [
- {
-
- }
- ]
- }
- }
- },
- {
- "cluster": {
- "name": "outbound|53||kube-dns.kube-system.svc.cluster.local",
- "type": "EDS",
- "edsClusterConfig": {
- "edsConfig": {
- "ads": {
-
- }
- },
- "serviceName": "outbound|53||kube-dns.kube-system.svc.cluster.local"
},
"connectTimeout": "1.000s",
"circuitBreakers": {
"thresholds": [
{
}
Listeners Match
Routes Match (RDS last loaded at Tue, 04 Aug 2020 11:52:54 IST)
```

Here you can see that the listeners and routes match but the clusters are out of sync.
## Deep dive into Envoy configuration
The `proxy-config` command can be used to see how a given Envoy instance is configured. This can then be used to
pinpoint any issues you are unable to detect by just looking through your Istio configuration and custom resources.
To get a basic summary of clusters, listeners or routes for a given pod use the command as follows (changing clusters
for listeners or routes when required):
```console
$ istioctl proxy-config cluster -n istio-system istio-ingressgateway-7d6874b48f-qxhn5
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
agent - - - STATIC
details.default.svc.cluster.local 9080 - outbound EDS details.default
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 853 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
...
productpage.default.svc.cluster.local 9080 - outbound EDS
prometheus_stats - - - STATIC
ratings.default.svc.cluster.local 9080 - outbound EDS
reviews.default.svc.cluster.local 9080 - outbound EDS
sds-grpc - - - STATIC
xds-grpc - - - STRICT_DNS
zipkin - - - STRICT_DNS
```

In order to debug Envoy you need to understand Envoy clusters/listeners/routes/endpoints and how they all interact.
We will use the `proxy-config` command with the `-o json` and filtering flags to follow Envoy as it determines where
to send a request from the `productpage` pod to the `reviews` pod at `reviews:9080`.
1. If you query the listener summary on a pod you will notice Istio generates the following listeners:
* A listener on `0.0.0.0:15006` that receives all inbound traffic to the pod and a listener on `0.0.0.0:15001` that receives all outbound traffic to the pod, then hands the request over to a virtual listener.
* A virtual listener per service IP, for each non-HTTP outbound TCP/HTTPS port.
* A virtual listener on the pod IP for each exposed port for inbound traffic.
* A virtual listener on `0.0.0.0` for each HTTP port for outbound HTTP traffic.
```console
$ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs
ADDRESS PORT MATCH DESTINATION
10.96.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
0.0.0.0 80 App: HTTP Route: 80
0.0.0.0 80 ALL PassthroughCluster
10.100.93.102 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local
10.111.121.13 443 ALL Cluster: outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.96.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local
10.100.93.102 853 App: HTTP Route: istiod.istio-system.svc.cluster.local:853
10.100.93.102 853 ALL Cluster: outbound|853||istiod.istio-system.svc.cluster.local
0.0.0.0 9080 App: HTTP Route: 9080
0.0.0.0 9080 ALL PassthroughCluster
0.0.0.0 9090 App: HTTP Route: 9090
0.0.0.0 9090 ALL PassthroughCluster
10.96.0.10 9153 App: HTTP Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15006 Addr: 10.244.0.22/32:15021 inbound|15021|mgmt-15021|mgmtCluster
0.0.0.0 15006 Addr: 10.244.0.22/32:9080 Inline Route: /*
0.0.0.0 15006 Trans: tls; App: HTTP TLS; Addr: 0.0.0.0/0 Inline Route: /*
0.0.0.0 15006 App: HTTP; Addr: 0.0.0.0/0 Inline Route: /*
0.0.0.0 15006 App: Istio HTTP Plain; Addr: 10.244.0.22/32:9080 Inline Route: /*
0.0.0.0 15006 Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15010 App: HTTP Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
10.100.93.102 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 App: HTTP Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
10.111.121.13 15021 App: HTTP Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.111.121.13 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
10.111.121.13 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
```

1. From the above summary you can see that every sidecar has a listener bound to `0.0.0.0:15006`, which is where iptables routes all inbound pod traffic, and a listener bound to `0.0.0.0:15001`, which is where iptables routes all outbound pod traffic. The `0.0.0.0:15001` listener hands the request over to the virtual listener that best matches the original destination of the request, if it can find a matching one. Otherwise, it sends the request to the `PassthroughCluster`, which connects to the destination directly.
```console
$ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs --port 15001 -o json
[
{
"name": "virtualOutbound",
"address": {
"socketAddress": {
"address": "0.0.0.0",
"portValue": 15001
}
},
"filterChains": [
{
"filters": [
{
"name": "istio.stats",
"typedConfig": {
"@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
"typeUrl": "type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm",
"value": {
"config": {
"configuration": "{\n \"debug\": \"false\",\n \"stat_prefix\": \"istio\"\n}\n",
"root_id": "stats_outbound",
"vm_config": {
"code": {
"local": {
"inline_string": "envoy.wasm.stats"
}
},
"runtime": "envoy.wasm.runtime.null",
"vm_id": "tcp_stats_outbound"
}
}
}
}
},
{
"name": "envoy.tcp_proxy",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
"statPrefix": "PassthroughCluster",
"cluster": "PassthroughCluster"
}
}
],
"name": "virtualOutbound-catchall-tcp"
}
],
"trafficDirection": "OUTBOUND",
"hiddenEnvoyDeprecatedUseOriginalDst": true
}
]
```

1. Our request is an outbound HTTP request to port `9080`, which means it gets handed off to the `0.0.0.0:9080` virtual
listener. This listener then looks up the route configuration in its configured RDS. In this case it will be looking
up route `9080` in RDS configured by Istiod (via ADS).
```console
$ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs -o json --address 0.0.0.0 --port 9080
...
"rds": {
"configSource": {
"ads": {},
"resourceApiVersion": "V3"
},
"routeConfigName": "9080"
}
...
```

1. The `9080` route configuration only has a virtual host for each service. Our request is heading to the `reviews`
service, so Envoy will select the virtual host whose domain matches our request. Once matched on domain, Envoy
looks for the first route that matches the request. In this case we don't have any advanced routing, so there is only
one route that matches on everything. This route tells Envoy to send the request to the
`outbound|9080||reviews.default.svc.cluster.local` cluster.
```console
$ istioctl proxy-config routes productpage-v1-6c886ff494-7vxhs --name 9080 -o json
[
{
"name": "9080",
"virtualHosts": [
{
"name": "reviews.default.svc.cluster.local:9080",
"domains": [
"reviews.default.svc.cluster.local",
"reviews",
"reviews.default.svc",
"reviews.default",
"10.98.88.0",
],
"routes": [
{
"name": "default",
"match": {
"prefix": "/"
},
"route": {
"cluster": "outbound|9080||reviews.default.svc.cluster.local",
"timeout": "0s",
}
}
]
...
1. This cluster is configured to retrieve the associated endpoints from Istiod (via ADS). So Envoy will then use the
`serviceName` field as a key to look up the list of Endpoints and proxy the request to one of them.
$ istioctl proxy-config cluster productpage-v1-6c886ff494-7vxhs --fqdn reviews.default.svc.cluster.local -o json
[
{
"name": "outbound|9080||reviews.default.svc.cluster.local",
"type": "EDS",
"edsClusterConfig": {
"edsConfig": {
"ads": {},
"resourceApiVersion": "V3"
},
"serviceName": "outbound|9080||reviews.default.svc.cluster.local"
},
"connectTimeout": "10s",
"circuitBreakers": {
"thresholds": [
{
"maxConnections": 4294967295,
"maxPendingRequests": 4294967295,
"maxRequests": 4294967295,
"maxRetries": 4294967295
}
]
},
}
]
1. To see the endpoints currently available for this cluster, use the `proxy-config endpoints` command.
$ istioctl proxy-config endpoints productpage-v1-6c886ff494-7vxhs --cluster "outbound|9080||reviews.default.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
172.17.0.7:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local
172.17.0.8:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local
172.17.0.9:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local
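The summary table is usually sufficient, but the same command supports `-o json` when you need per-endpoint details such as health status or locality. A minimal sketch, assuming `jq` is available; the field names in the filter follow Envoy's cluster status output and may vary between versions:

```console
$ istioctl proxy-config endpoints productpage-v1-6c886ff494-7vxhs \
    --cluster "outbound|9080||reviews.default.svc.cluster.local" -o json \
    | jq '.[].hostStatuses[].address.socketAddress'
```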
## Inspecting bootstrap configuration
So far we have looked at configuration retrieved (mostly) from Istiod; however, Envoy also requires some bootstrap configuration that
includes information such as where Istiod can be found. To view it, use the following command:
$ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-7d6874b48f-qxhn5
{
"bootstrap": {
"node": {
"id": "router~172.30.86.14~istio-ingressgateway-7d6874b48f-qxhn5.istio-system~istio-system.svc.cluster.local",
"cluster": "istio-ingressgateway",
"metadata": {
"CLUSTER_ID": "Kubernetes",
"EXCHANGE_KEYS": "NAME,NAMESPACE,INSTANCE_IPS,LABELS,OWNER,PLATFORM_METADATA,WORKLOAD_NAME,MESH_ID,SERVICE_ACCOUNT,CLUSTER_ID",
"INSTANCE_IPS": "10.244.0.7",
"ISTIO_PROXY_SHA": "istio-proxy:f98b7e538920abc408fbc91c22a3b32bc854d9dc",
"ISTIO_VERSION": "1.7.0",
"LABELS": {
"app": "istio-ingressgateway",
"chart": "gateways",
"heritage": "Tiller",
"istio": "ingressgateway",
"pod-template-hash": "68bf7d7f94",
"release": "istio",
"service.istio.io/canonical-name": "istio-ingressgateway",
"service.istio.io/canonical-revision": "latest"
},
"MESH_ID": "cluster.local",
"NAME": "istio-ingressgateway-68bf7d7f94-sp226",
"NAMESPACE": "istio-system",
"OWNER": "kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway",
"ROUTER_MODE": "sni-dnat",
"SDS": "true",
"SERVICE_ACCOUNT": "istio-ingressgateway-service-account",
"WORKLOAD_NAME": "istio-ingressgateway"
},
"userAgentBuildVersion": {
"version": {
"majorNumber": 1,
"minorNumber": 15
},
"metadata": {
"build.type": "RELEASE",
"revision.sha": "f98b7e538920abc408fbc91c22a3b32bc854d9dc",
"revision.status": "Clean",
"ssl.version": "BoringSSL"
}
},
},
...
## Verifying connectivity to Istiod
Verifying connectivity to Istiod is a useful troubleshooting step. Every proxy container in the service mesh should be able to communicate with Istiod. This can be accomplished in a few simple steps:
1. Create a `curl` pod:
$ kubectl create namespace foo
$ kubectl apply -f <(istioctl kube-inject -f samples/curl/curl.yaml) -n foo
1. Test connectivity to Istiod using `curl`. The following example queries Istiod's version endpoint on port 15014 using the default Istiod service name and namespace:
$ kubectl exec $(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name}) -c curl -n foo -- curl -sS istiod.istio-system:15014/version
You should receive a response listing the version of Istiod.
## What Envoy version is Istio using?
To find out the Envoy version used in your deployment, you can `exec` into the container and query the `server_info` endpoint:
$ kubectl exec -it productpage-v1-6b746f74dc-9stvs -c istio-proxy -n default -- pilot-agent request GET server_info --log_as_json | jq {version}
{
"version": "2d4ec97f3ac7b3256d060e1bb8aa6c415f5cef63/1.17.0/Clean/RELEASE/BoringSSL"
}
---
title: Understand your Mesh with Istioctl Describe
description: Shows you how to use istioctl describe to verify the configurations of a pod in your mesh.
weight: 30
keywords: [traffic-management, istioctl, debugging, kubernetes]
aliases:
- /docs/ops/troubleshooting/istioctl-describe
owner: istio/wg-user-experience-maintainers
test: no
---
In Istio 1.3, we included the [`istioctl experimental describe`](/docs/reference/commands/istioctl/#istioctl-experimental-describe-pod)
command. This CLI command provides you with the information needed to understand
the configuration impacting a pod. This guide shows
you how to use this experimental sub-command to see if a pod is in the mesh and
verify its configuration.
The basic usage of the command is as follows:
$ istioctl experimental describe pod <pod-name>[.<namespace>]
Appending a namespace to the pod name has the same effect as using the `-n` option
of `istioctl` to specify a non-default namespace.
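For example, the following two invocations are equivalent (the pod and namespace names are placeholders):

```console
$ istioctl experimental describe pod my-pod.my-namespace
$ istioctl experimental describe pod my-pod -n my-namespace
```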
Just like all other `istioctl` commands, you can replace `experimental`
with `x` for convenience.
This guide assumes you have deployed the [Bookinfo](/docs/examples/bookinfo/)
sample in your mesh. If you haven't already done so,
[start the application's services](/docs/examples/bookinfo/#start-the-application-services)
and [determine the IP and port of the ingress](/docs/examples/bookinfo/#determine-the-ingress-ip-and-port)
before continuing.
## Verify a pod is in the mesh
The `istioctl describe` command returns a warning if the Envoy
proxy is not present in a pod or if the proxy has not started. Additionally, the command warns
if some of the [Istio requirements for pods](/docs/ops/deployment/application-requirements/)
are not met.
For example, the following command produces a warning indicating a `kube-dns`
pod is not part of the service mesh because it has no sidecar:
$ export KUBE_POD=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod -n kube-system $KUBE_POD
Pod: coredns-f9fd979d6-2zsxk
Pod Ports: 53/UDP (coredns), 53 (coredns), 9153 (coredns)
WARNING: coredns-f9fd979d6-2zsxk is not part of mesh; no Istio sidecar
--------------------
2021-01-22T16:10:14.080091Z error klog an error occurred forwarding 42785 -> 15000: error forwarding port 15000 to pod 692362a4fe313005439a873a1019a62f52ecd02c3de9a0957cd0af8f947866e5, uid : failed to execute portforward in network namespace "/var/run/netns/cni-3c000d0a-fb1c-d9df-8af8-1403e6803c22": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused[]
Error: failed to execute command on sidecar: failure running port forward process: Get "http://localhost:42785/config_dump": EOF
The command will not produce such a warning for a pod that is part of the mesh,
for example the Bookinfo `ratings` service; instead, it will output the Istio configuration applied to the pod:
$ export RATINGS_POD=$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')
$ istioctl experimental describe pod $RATINGS_POD
Pod: ratings-v1-7dc98c7588-8jsbw
Pod Ports: 9080 (ratings), 15090 (istio-proxy)
--------------------
Service: ratings
Port: http 9080/HTTP targets pod port 9080
The output shows the following information:
- The ports of the service container in the pod, `9080` for the `ratings` container in this example.
- The ports of the `istio-proxy` container in the pod, `15090` in this example.
- The protocol used by the service in the pod, `HTTP` over port `9080` in this example.
## Verify destination rule configurations
You can use `istioctl describe` to see what
[destination rules](/docs/concepts/traffic-management/#destination-rules) apply to requests
to a pod. For example, apply the Bookinfo
[mutual TLS destination rules](/samples/bookinfo/networking/destination-rule-all-mtls.yaml):
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
Now describe the `ratings` pod again:
$ istioctl x describe pod $RATINGS_POD
Pod: ratings-v1-f745cf57b-qrxl2
Pod Ports: 9080 (ratings), 15090 (istio-proxy)
--------------------
Service: ratings
Port: http 9080/HTTP
DestinationRule: ratings for "ratings"
Matching subsets: v1
(Non-matching subsets v2,v2-mysql,v2-mysql-vm)
Traffic Policy TLS Mode: ISTIO_MUTUAL
The command now shows additional output:
- The `ratings` destination rule applies to requests to the `ratings` service.
- The subset of the `ratings` destination rule that matches the pod, `v1` in this example.
- The other subsets defined by the destination rule.
- The pod accepts either HTTP or mutual TLS requests but clients use mutual TLS.
## Verify virtual service configurations
When [virtual services](/docs/concepts/traffic-management/#virtual-services) configure
routes to a pod, `istioctl describe` will also include the routes in its output.
For example, apply the
[Bookinfo virtual services](/samples/bookinfo/networking/virtual-service-all-v1.yaml)
that route all requests to `v1` pods:
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@
Then, describe a pod implementing `v1` of the `reviews` service:
$ export REVIEWS_V1_POD=$(kubectl get pod -l app=reviews,version=v1 -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
1 HTTP route(s)
The output contains similar information to that shown previously for the `ratings` pod,
but it also includes the virtual service's routes to the pod.
The `istioctl describe` command doesn't just show the virtual services impacting the pod.
If a virtual service is configured for the pod's service host but no traffic will reach the pod,
the command's output includes a warning. This can occur when the virtual service
effectively blocks traffic by never routing it to the pod's subset. For
example:
$ export REVIEWS_V2_POD=$(kubectl get pod -l app=reviews,version=v2 -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $REVIEWS_V2_POD
...
VirtualService: reviews
WARNING: No destinations match pod subsets (checked 1 HTTP routes)
Route to non-matching subset v1 for (everything)
The warning includes the cause of the problem, how many routes were checked, and
even gives you information about the other routes in place. In this example,
no traffic arrives at the `v2` pod because the route in the virtual service directs all
traffic to the `v1` subset.
If you now delete the Bookinfo destination rules:
$ kubectl delete -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
You can see another useful feature of `istioctl describe`:
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
WARNING: No destinations match pod subsets (checked 1 HTTP routes)
Warning: Route to subset v1 but NO DESTINATION RULE defining subsets!
The output shows you that you deleted the destination rule but not the virtual
service that depends on it. The virtual service routes traffic to the `v1`
subset, but there is no destination rule defining the `v1` subset.
Thus, traffic destined for version `v1` can't flow to the pod.
If you refresh the browser to send a new request to Bookinfo at this
point, you would see the following message: `Error fetching product reviews`.
To fix the problem, reapply the destination rule:
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
Reloading the browser shows the app working again and
running `istioctl experimental describe pod $REVIEWS_V1_POD` no longer produces
warnings.
## Verifying traffic routes
The `istioctl describe` command shows split traffic weights too.
For example, run the following command to route 90% of traffic to the `v1` subset
and 10% to the `v2` subset of the `reviews` service:
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-90-10.yaml@
Now describe the `reviews v1` pod:
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
Weight 90%
The output shows that the `reviews` virtual service has a weight of 90% for the
`v1` subset.
This function is also helpful for other types of routing. For example, you can deploy
header-specific routing:
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-jason-v2-v3.yaml@
Then, describe the pod again:
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
WARNING: No destinations match pod subsets (checked 2 HTTP routes)
Route to non-matching subset v2 for (when headers are end-user=jason)
Route to non-matching subset v3 for (everything)
The output includes a warning because you are describing a pod in the `v1` subset, which matches neither route.
However, the virtual service configuration you applied routes traffic to the `v2`
subset if the header contains `end-user=jason` and to the `v3` subset in all
other cases.
## Verifying strict mutual TLS
Following the [mutual TLS migration](/docs/tasks/security/authentication/mtls-migration/)
instructions, you can enable strict mutual TLS for the `ratings` service:
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
name: ratings-strict
spec:
selector:
matchLabels:
app: ratings
mtls:
mode: STRICT
EOF
Run the following command to describe the `ratings` pod:
$ istioctl x describe pod $RATINGS_POD
Pilot reports that pod enforces mTLS and clients speak mTLS
The output reports that requests to the `ratings` pod are now locked down and secure.
Sometimes, however, a deployment breaks when switching mutual TLS to `STRICT` mode.
The likely cause is that a destination rule no longer matches the new configuration.
For example, if you configure the Bookinfo clients to not use mutual TLS using the
[plain HTTP destination rules](/samples/bookinfo/networking/destination-rule-all.yaml):
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
If you open Bookinfo in your browser, you see `Ratings service is currently unavailable`.
To learn why, run the following command:
$ istioctl x describe pod $RATINGS_POD
...
WARNING Pilot predicts TLS Conflict on ratings-v1-f745cf57b-qrxl2 port 9080 (pod enforces mTLS, clients speak HTTP)
Check DestinationRule ratings/default and AuthenticationPolicy ratings-strict/default
The output includes a warning describing the conflict
between the destination rule and the authentication policy.
You can restore correct behavior by applying a destination rule that uses
mutual TLS:
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
## Conclusion and cleanup
Our goal with the `istioctl x describe` command is to help you understand the
traffic and security configurations in your Istio mesh.
We would love to hear your ideas for improvements!
Please join us at [https://discuss.istio.io](https://discuss.istio.io).
To remove the Bookinfo pods and configurations used in this guide, run the
following commands:
$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl delete -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
$ kubectl delete -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@
---
title: Using the Istioctl Command-line Tool
description: Istio includes a supplemental tool that provides debugging and diagnosis for Istio service mesh deployments.
weight: 10
keywords: [istioctl,bash,zsh,shell,command-line]
aliases:
- /help/ops/component-debugging
- /docs/ops/troubleshooting/istioctl
owner: istio/wg-user-experience-maintainers
test: no
---
You can gain insights into what individual components are doing by inspecting their
[logs](/docs/ops/diagnostic-tools/component-logging/) or peering inside via
[introspection](/docs/ops/diagnostic-tools/controlz/). If that's insufficient,
the steps below explain how to get under the hood.
The [`istioctl`](/docs/reference/commands/istioctl) tool is a configuration command line utility
that allows service operators to debug and diagnose their Istio service mesh deployments.
The Istio project also includes two helpful scripts for `istioctl` that enable auto-completion
for Bash and Zsh. Both of these scripts provide support for the currently available `istioctl` commands.
`istioctl` only has auto-completion enabled for non-deprecated commands.
## Before you begin
We recommend you use an `istioctl` version that is the same version as your Istio control plane.
Using matching versions helps avoid unforeseen issues.
If you have already [downloaded the Istio release](/docs/setup/additional-setup/download-istio-release/), you should
already have `istioctl` and do not need to install it again.
## Install
Install the `istioctl` binary with `curl`:
1. Download the latest release with the command:
$ curl -sL https://istio.io/downloadIstioctl | sh -
1. Add the `istioctl` client to your path, on a macOS or Linux system:
$ export PATH=$HOME/.istioctl/bin:$PATH
1. You can optionally enable the [auto-completion option](#enabling-auto-completion) when working with a bash or Zsh console.
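After completing these steps, a quick sanity check is to print the client version. The `--remote=false` flag limits the output to the `istioctl` client itself, so no cluster connection is needed:

```console
$ istioctl version --remote=false
```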
## Get an overview of your mesh
You can get an overview of your mesh using the `proxy-status` or `ps` command:
$ istioctl proxy-status
If a proxy is missing from the output list, it means that it is not currently connected to an istiod instance, so it
will not receive any configuration. Additionally, if it is marked stale, it likely means there are networking issues or
istiod needs to be scaled.
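If a particular proxy is out of sync, you can pass its pod name to the same command to print a diff between the configuration istiod would send and the configuration Envoy has loaded, which narrows down exactly what is stale:

```console
$ istioctl proxy-status <pod-name>[.<namespace>]
```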
## Get proxy configuration
[`istioctl`](/docs/reference/commands/istioctl) allows you to retrieve information
about proxy configuration using the `proxy-config` or `pc` command.
For example, to retrieve information about cluster configuration for the Envoy instance in a specific pod:
$ istioctl proxy-config cluster <pod-name> [flags]
To retrieve information about bootstrap configuration for the Envoy instance in a specific pod:
$ istioctl proxy-config bootstrap <pod-name> [flags]
To retrieve information about listener configuration for the Envoy instance in a specific pod:
$ istioctl proxy-config listener <pod-name> [flags]
To retrieve information about route configuration for the Envoy instance in a specific pod:
$ istioctl proxy-config route <pod-name> [flags]
To retrieve information about endpoint configuration for the Envoy instance in a specific pod:
$ istioctl proxy-config endpoints <pod-name> [flags]
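These commands also accept output-format and filtering flags. For example, to dump the full JSON for the outbound capture listener on port 15001 (the pod name is a placeholder):

```console
$ istioctl proxy-config listener <pod-name> --port 15001 -o json
```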
See [Debugging Envoy and Istiod](/docs/ops/diagnostic-tools/proxy-cmd/) for more advice on interpreting this information.
## `istioctl` auto-completion
If you are using the macOS operating system with the Zsh terminal shell, make sure that
the `zsh-completions` package is installed. With the [brew](https://brew.sh) package manager
for macOS, you can check to see if the `zsh-completions` package is installed with the following command:
$ brew list zsh-completions
/usr/local/Cellar/zsh-completions/0.34.0/share/zsh-completions/ (147 files)
If you receive `Error: No such keg: /usr/local/Cellar/zsh-completions`,
proceed with installing the `zsh-completions` package with the following command:
$ brew install zsh-completions
Once the `zsh-completions` package has been installed on your macOS system, add the following to your `~/.zshrc` file:
if type brew &>/dev/null; then
FPATH=$(brew --prefix)/share/zsh-completions:$FPATH
autoload -Uz compinit
compinit
fi
You may also need to force rebuild `zcompdump`:
$ rm -f ~/.zcompdump; compinit
Additionally, if you receive `Zsh compinit: insecure directories` warnings
when attempting to load these completions, you may need to run this:
$ chmod -R go-w "$(brew --prefix)/share"
If you are using a Linux-based operating system, you can install the Bash completion package
with the `apt-get install bash-completion` command on Debian-based Linux distributions or with
`yum install bash-completion` on RPM-based Linux distributions, the two most common cases.
Once the `bash-completion` package has been installed on your Linux system,
add the following line to your `~/.bash_profile` file:
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
### Enabling auto-completion
To enable `istioctl` completion on your system, follow the steps for your preferred shell:
You will need to download the full Istio release containing the auto-completion files (in the `/tools` directory).
If you haven't already done so, [download the full release](/docs/setup/additional-setup/download-istio-release/) now.
**Installing the bash auto-completion file**
If you are using bash, the `istioctl` auto-completion file is located in the `tools` directory.
To use it, copy the `istioctl.bash` file to your home directory, then add the following line to
source the `istioctl` tab completion file from your `.bashrc` file:
$ source ~/istioctl.bash
**Installing the Zsh auto-completion file**
For Zsh users, the `istioctl` auto-completion file is located in the `tools` directory.
Copy the `_istioctl` file to your home directory, or any directory of your choosing
(update directory in script snippet below), and source the `istioctl` auto-completion file
in your `.zshrc` file as follows:
source ~/_istioctl
You may also add the `_istioctl` file to a directory listed in the `fpath` variable.
To achieve this, place the `_istioctl` file in an existing directory in the `fpath`,
or create a new directory and add it to the `fpath` variable in your `~/.zshrc` file.
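For example, a minimal sketch that keeps completion files in `~/.zsh/completions` (an arbitrary directory chosen for this example) and adds it to `fpath` in `~/.zshrc`:

```console
$ mkdir -p ~/.zsh/completions
$ cp ~/_istioctl ~/.zsh/completions/
$ cat <<'EOF' >> ~/.zshrc
# directory holding user-managed completion scripts (arbitrary choice for this sketch)
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit
compinit
EOF
```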
If you get an error like `complete:13: command not found: compdef`,
then add the following to the beginning of your `~/.zshrc` file:
$ autoload -Uz compinit
$ compinit
If your auto-completion is not working, try again after restarting your terminal.
If auto-completion still does not work, try resetting the completion cache using
the above commands in your terminal.
### Using auto-completion
If the `istioctl` completion file has been installed correctly, press the Tab key
while writing an `istioctl` command, and it should return a set of command suggestions
for you to choose from:
$ istioctl proxy-<TAB>
proxy-config proxy-status
---
title: Troubleshooting Multicluster
description: Describes tools and techniques to diagnose issues with multicluster and multi-network installations.
weight: 90
keywords: [debug,multicluster,multi-network,envoy]
owner: istio/wg-environments-maintainers
test: no
---
This page describes how to troubleshoot issues with Istio deployed to multiple clusters and/or networks.
Before reading this, you should take the steps in [Multicluster Installation](/docs/setup/install/multicluster/)
and read the [Deployment Models](/docs/ops/deployment/deployment-models/) guide.
## Cross-Cluster Load Balancing
The most common, and also broadest, problem with multi-network installations is that cross-cluster load balancing doesn’t work. Usually this manifests as seeing responses only from the cluster-local instance of a service:
$ for i in $(seq 10); do kubectl --context=$CTX_CLUSTER1 -n sample exec curl-dd98b5f48-djwdw -c curl -- curl -s helloworld:5000/hello; done
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
...
When following the guide to [verify multicluster installation](/docs/setup/install/multicluster/verify/),
we would expect both `v1` and `v2` responses, indicating that traffic is reaching both clusters.
There are many possible causes to the problem:
### Connectivity and firewall issues
In some environments it may not be apparent that a firewall is blocking traffic between your clusters. It's possible
that `ICMP` (ping) traffic may succeed, but HTTP and other types of traffic do not. This can appear as a timeout, or
in some cases a more confusing error such as:
upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST
While Istio provides service discovery capabilities to make cross-cluster communication easier, when the pods in each cluster
are on a single network, traffic between them should succeed even without Istio. To rule out issues with TLS/mTLS, you can do a manual
traffic test using pods without Istio sidecars.
In each cluster, create a new namespace for this test. Do _not_ enable sidecar injection:
$ kubectl create --context="${CTX_CLUSTER1}" namespace uninjected-sample
$ kubectl create --context="${CTX_CLUSTER2}" namespace uninjected-sample
Then deploy the same apps used in [verify multicluster installation](/docs/setup/install/multicluster/verify/):
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/helloworld/helloworld.yaml \
-l service=helloworld -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l service=helloworld -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/helloworld/helloworld.yaml \
-l version=v1 -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l version=v2 -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/curl/curl.yaml -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/curl/curl.yaml -n uninjected-sample
Verify that there is a helloworld pod running in `cluster2`, using the `-o wide` flag, so we can get the Pod IP:
$ kubectl --context="${CTX_CLUSTER2}" -n uninjected-sample get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-557747455f-jdsd8 1/1 Running 0 41s 10.100.0.2 node-2 <none> <none>
helloworld-v2-54df5f84b-z28p5 1/1 Running 0 43s 10.100.0.1 node-1 <none> <none>
Take note of the `IP` column for `helloworld`. In this case, it is `10.100.0.1`:
$ REMOTE_POD_IP=10.100.0.1
Next, attempt to send traffic from the `curl` pod in `cluster1` directly to this Pod IP:
$ kubectl exec --context="${CTX_CLUSTER1}" -n uninjected-sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n uninjected-sample -l \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS $REMOTE_POD_IP:5000/hello
Hello version: v2, instance: helloworld-v2-54df5f84b-z28p5
If successful, there should be responses only from `helloworld-v2`. Repeat the steps, but send traffic from `cluster2`
to `cluster1`.
If this succeeds, you can rule out connectivity issues. If it does not, the cause of the problem may lie outside your
Istio configuration.
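If the direct pod-to-pod request fails, you can rerun the same test with verbose output and a short timeout to distinguish a refused connection from a silent drop (a firewall typically produces the latter); a sketch reusing the variables above:

$ kubectl exec --context="${CTX_CLUSTER1}" -n uninjected-sample -c curl \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n uninjected-sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -v --connect-timeout 5 "$REMOTE_POD_IP:5000/hello"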
### Locality Load Balancing
[Locality load balancing](/docs/tasks/traffic-management/locality-load-balancing/failover/#configure-locality-failover)
can be used to make clients prefer that traffic go to the nearest destination. If the clusters
are in different localities (region/zone), locality load balancing will prefer the local-cluster and is working as
intended. If locality load balancing is disabled, or the clusters are in the same locality, there may be another issue.
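If you are unsure whether your clusters are actually in different localities, you can inspect the standard topology labels on the nodes of each cluster, which Istio uses to derive locality; for example:

$ kubectl --context="${CTX_CLUSTER1}" get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone
$ kubectl --context="${CTX_CLUSTER2}" get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone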
### Trust Configuration
Cross-cluster traffic, as with intra-cluster traffic, relies on a common root of trust between the proxies. By default,
each Istio installation uses its own individually generated root certificate authority. For multi-cluster, we
must manually configure a shared root of trust. Follow Plug-in Certs below or read [Identity and Trust Models](/docs/ops/deployment/deployment-models/#identity-and-trust-models)
to learn more.
**Plug-in Certs:**
To verify certs are configured correctly, you can compare the root-cert in each cluster:
$ diff \
<(kubectl --context="${CTX_CLUSTER1}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}') \
<(kubectl --context="${CTX_CLUSTER2}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}')
If the root-certs do not match or the secret does not exist at all, you can follow the [Plugin CA Certs](/docs/tasks/security/cert-management/plugin-ca-cert/)
guide, ensuring to run the steps for every cluster.
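You can also compare the root certificate that istiod actually distributes to workloads, which is written to the `istio-ca-root-cert` ConfigMap in each namespace; a minimal sketch, assuming the `sample` namespace exists in both clusters:

$ diff \
   <(kubectl --context="${CTX_CLUSTER1}" -n sample get configmap istio-ca-root-cert -ojsonpath='{.data.root-cert\.pem}') \
   <(kubectl --context="${CTX_CLUSTER2}" -n sample get configmap istio-ca-root-cert -ojsonpath='{.data.root-cert\.pem}')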
### Step-by-step Diagnosis
If you've gone through the sections above and are still having issues, then it's time to dig a little deeper.
The following steps assume you're following the [HelloWorld verification](/docs/setup/install/multicluster/verify/).
Before continuing, make sure both `helloworld` and `curl` are deployed in each cluster.
From each cluster, find the endpoints the `curl` service has for `helloworld`:
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
Troubleshooting information differs based on the cluster that is the source of traffic.

**From the primary cluster (using `$CTX_CLUSTER1`):**

$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
10.0.0.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
Only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster.
Verify that remote secrets are configured properly.
$ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiCluster=true"
* If the secret is missing, create it (see the example after this list).
* If the secret is present:
* Look at the config in the secret. Make sure the cluster name is used as the data key for the remote `kubeconfig`.
* If the secret looks correct, check the logs of `istiod` for connectivity or permissions issues reaching the
remote Kubernetes API server. Log messages may include `Failed to add remote cluster from secret` along with an
error reason.
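For example, the remote secret can be regenerated from the remote cluster's credentials and applied to the primary cluster, and the `istiod` logs inspected for the error above; a sketch, assuming `cluster1` is the primary and `cluster2` the remote cluster:

$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
$ kubectl logs --context="${CTX_CLUSTER1}" -n istio-system deploy/istiod | grep -i "remote cluster"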
**From the remote cluster (using `$CTX_CLUSTER2`):**

$ istioctl --context $CTX_CLUSTER2 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
10.0.1.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
Only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster.
Verify that remote secrets are configured properly.
$ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiCluster=true"
* If the secret is missing, create it.
* If the secret is present and the endpoint is a Pod in the **primary** cluster:
* Look at the config in the secret. Make sure the cluster name is used as the data key for the remote `kubeconfig`.
* If the secret looks correct, check the logs of `istiod` for connectivity or permissions issues reaching the
remote Kubernetes API server. Log messages may include `Failed to add remote cluster from secret` along with an
error reason.
* If the secret is present and the endpoint is a Pod in the **remote** cluster:
    * The proxy is reading configuration from an istiod inside the remote cluster. When a remote cluster has an in-cluster istiod, it is only meant for sidecar injection and CA. You can verify this is the problem by looking for a Service named `istiod-remote` in the `istio-system` namespace (see the check below). If it's missing, reinstall, making sure
`values.global.remotePilotAddress` is set.
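A minimal check for this case, run against the remote cluster:

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system get service istiod-remote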
The steps for Primary and Remote clusters still apply for multi-network, although multi-network has an additional case:
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
10.0.5.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
10.0.6.13:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
In multi-network, we expect one of the endpoint IPs to match the remote cluster's east-west gateway public IP. Seeing
multiple Pod IPs indicates one of two things:
* The address of the gateway for the remote network cannot be determined.
* The network of either the client or server pod cannot be determined.
**The address of the gateway for the remote network cannot be determined:**
In the remote cluster that cannot be reached, check that the Service has an External IP:
$ kubectl -n istio-system get service -l "istio=eastwestgateway"
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.8.17.119 <PENDING> 15021:31781/TCP,15443:30498/TCP,15012:30879/TCP,15017:30336/TCP 76m
If the `EXTERNAL-IP` is stuck in `<PENDING>`, the environment may not support `LoadBalancer` services. In this case, it
may be necessary to customize the `spec.externalIPs` section of the Service to manually give the Gateway an IP reachable
from outside the cluster.
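For example, one way to do this is to patch the Service directly; a sketch only, using a placeholder address that you would replace with an IP reachable from the other clusters:

$ kubectl -n istio-system patch service istio-eastwestgateway \
    -p '{"spec":{"externalIPs":["203.0.113.10"]}}'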
If the external IP is present, check that the Service includes a `topology.istio.io/network` label with the correct
value. If that is incorrect, reinstall the gateway and make sure to set the `--network` flag on the generation script.
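For example, to read the network label on the east-west gateway Service (the value should match the network name the cluster was installed with):

$ kubectl -n istio-system get service istio-eastwestgateway \
    -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'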
**The network of either the client or server pod cannot be determined:**
On the source pod, check the proxy metadata.
$ kubectl get pod $CURL_POD_NAME \
-o jsonpath="{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}"
$ kubectl get pod $HELLOWORLD_POD_NAME \
-o jsonpath="{.metadata.labels.topology\.istio\.io/network}"
If either of these values isn't set, or has the wrong value, istiod may treat the client and server proxies as being on the same network and send network-local endpoints.
When these aren't set, check that `values.global.network` was set properly during install, or that the injection webhook is configured correctly.
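One way to check the injector's configured default network is to inspect the values stored in the sidecar injector ConfigMap; a sketch, assuming a default installation where the ConfigMap is named `istio-sidecar-injector`:

$ kubectl -n istio-system get configmap istio-sidecar-injector \
    -o jsonpath='{.data.values}' | grep '"network"'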
Istio determines the network of a Pod using the `topology.istio.io/network` label which is set during injection. For
non-injected Pods, Istio relies on the `topology.istio.io/network` label set on the system namespace in the cluster.
In each cluster, check the network:
$ kubectl --context="${CTX_CLUSTER1}" get ns istio-system -ojsonpath='{.metadata.labels.topology\.istio\.io/network}'
If the above command doesn't output the expected network name, set the label:
$ kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
---
title: Debugging Virtual Machines
description: Describes tools and techniques to diagnose issues with Virtual Machines.
weight: 80
keywords: [debug,virtual-machines,envoy]
owner: istio/wg-environments-maintainers
test: n/a
---
This page describes how to troubleshoot issues with Istio deployed to Virtual Machines.
Before reading this, you should take the steps in [Virtual Machine Installation](/docs/setup/install/virtual-machine/).
Additionally, [Virtual Machine Architecture](/docs/ops/deployment/vm-architecture/) can help you understand how the components interact.
Troubleshooting an Istio Virtual Machine installation is similar to troubleshooting issues with proxies running inside Kubernetes, but there are some key differences to be aware of.
While much of the same information is available on both platforms, accessing this information differs.
## Monitoring health
The Istio sidecar is typically run as a `systemd` unit. To ensure it is running properly, you can check its status:
$ systemctl status istio
Additionally, the sidecar's health can be checked programmatically at its health endpoint:
$ curl localhost:15021/healthz/ready -I
## Logs
Logs for the Istio proxy can be found in a few places.
To access the `systemd` logs, which contain details about the initialization of the proxy:
$ journalctl -f -u istio -n 1000
The proxy will redirect `stderr` and `stdout` to `/var/log/istio/istio.err.log` and `/var/log/istio/istio.log`, respectively.
To view these in a format similar to `kubectl`:
$ tail /var/log/istio/istio.err.log /var/log/istio/istio.log -Fq -n 100
Log levels can be modified by changing the `cluster.env` configuration file. Make sure to restart `istio` if it is already running:
$ echo "ISTIO_AGENT_FLAGS=\"--log_output_level=dns:debug --proxyLogLevel=debug\"" >> /var/lib/istio/envoy/cluster.env
$ systemctl restart istio
## Iptables
To ensure `iptables` rules have been successfully applied:
$ sudo iptables-save
...
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
## Istioctl
Most `istioctl` commands will function properly with virtual machines. For example, `istioctl proxy-status` can be used to view all connected proxies:
$ istioctl proxy-status
NAME CDS LDS EDS RDS ISTIOD VERSION
vm-1.default SYNCED SYNCED SYNCED SYNCED istiod-789ffff8-f2fkt
However, `istioctl proxy-config` relies on functionality in Kubernetes to connect to a proxy, which will not work for virtual machines.
Instead, a file containing the configuration dump from Envoy can be passed. For example:
$ curl -s localhost:15000/config_dump | istioctl proxy-config clusters --file -
SERVICE FQDN PORT SUBSET DIRECTION TYPE
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
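The same `--file` approach should work with the other `proxy-config` subcommands; for example, to inspect listeners from the same configuration dump:

$ curl -s localhost:15000/config_dump | istioctl proxy-config listeners --file -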
## Automatic registration
When a virtual machine connects to Istiod, a `WorkloadEntry` will automatically be created. This enables
the virtual machine to become a part of a `Service`, similar to an `Endpoint` in Kubernetes.
To check that these have been created correctly:
$ kubectl get workloadentries
NAME AGE ADDRESS
vm-10.128.0.50 14m 10.128.0.50
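To confirm the virtual machine is also being served as an endpoint of its `Service`, you can check the endpoints an in-mesh proxy sees for it; a sketch, assuming an in-mesh pod labeled `app=curl` in the `default` namespace and the `WorkloadEntry` address shown above:

$ istioctl proxy-config endpoint "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" | grep 10.128.0.50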
## Certificates
Virtual machines handle certificates differently than Kubernetes Pods, which use a Kubernetes-provided service account token
to authenticate and renew mTLS certificates. Instead, existing mTLS credentials are used to authenticate with the certificate authority and
renew certificates.
The status of these certificates can be viewed in the same way as in Kubernetes:
$ curl -s localhost:15000/config_dump | ./istioctl proxy-config secret --file -
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 251932493344649542420616421203546836446 2021-01-29T18:07:21Z 2021-01-28T18:07:21Z
ROOTCA CA ACTIVE true 81663936513052336343895977765039160718 2031-01-26T17:54:44Z 2021-01-28T17:54:44Z
Additionally, these are persisted to disk to ensure downtime or restarts do not lose state.
$ ls /etc/certs
cert-chain.pem key.pem root-cert.pem
---
title: SPIRE
description: How to configure Istio to integrate with SPIRE to get cryptographic identities through Envoy's SDS API.
weight: 31
keywords: [kubernetes,spiffe,spire]
aliases:
owner: istio/wg-networking-maintainers
test: yes
---
[SPIRE](https://spiffe.io/docs/latest/spire-about/spire-concepts/) is a production-ready implementation of the SPIFFE specification that performs node and workload attestation in order to securely
issue cryptographic identities to workloads running in heterogeneous environments. SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with
[Envoy's SDS API](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret). Istio can detect the existence of a UNIX Domain Socket that implements the Envoy SDS API on a defined
socket path, allowing Envoy to communicate and fetch identities directly from it.
This integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio's powerful service management.
For example, SPIRE's plugin architecture enables diverse workload attestation options beyond the Kubernetes namespace and service account attestation offered by Istio.
SPIRE's node attestation extends attestation to the physical or virtual hardware on which workloads run.
For a quick demo of how this SPIRE integration with Istio works, see [Integrating SPIRE as a CA through Envoy's SDS API](/samples/security/spire).
## Install SPIRE
We recommend you follow SPIRE's installation instructions and best practices when installing SPIRE, particularly when deploying it in production environments.
For the examples in this guide, the [SPIRE Helm charts](https://artifacthub.io/packages/helm/spiffe/spire) will be used with upstream defaults, to focus on just the configuration necessary to integrate SPIRE and Istio.
$ helm upgrade --install -n spire-server spire-crds spire-crds --repo https://spiffe.github.io/helm-charts-hardened/ --create-namespace
$ helm upgrade --install -n spire-server spire spire --repo https://spiffe.github.io/helm-charts-hardened/ --wait --set global.spire.trustDomain="example.org"
See the [SPIRE Helm chart](https://artifacthub.io/packages/helm/spiffe/spire) documentation for other values you can configure for your installation.
It is important that SPIRE and Istio are configured with the exact same trust domain, to prevent authentication and authorization errors, and that the [SPIFFE CSI driver](https://github.com/spiffe/spiffe-csi) is enabled and installed.
By default, the above will also install:
- The [SPIFFE CSI driver](https://github.com/spiffe/spiffe-csi), which is used to mount an Envoy-compatible SDS socket into proxies. Using the SPIFFE CSI driver to mount SDS sockets is strongly recommended by both Istio and SPIRE, as `hostMounts` are a larger security risk and introduce operational hurdles. This guide assumes the use of the SPIFFE CSI driver.
- The [SPIRE Controller Manager](https://github.com/spiffe/spire-controller-manager), which eases the creation of SPIFFE registrations for workloads.
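Because SPIRE and Istio must be configured with exactly the same trust domain, it can be useful to confirm what each side ended up with; a rough sketch (the SPIRE server ConfigMap name and namespace may differ depending on your chart values):

$ kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' | grep trustDomain
$ kubectl -n spire-server get configmap spire-server -o yaml | grep trust_domain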
## Register workloads
By design, SPIRE only grants identities to workloads that have been registered with the SPIRE server; this includes user workloads, as well as Istio components. Istio sidecars and gateways, once configured for SPIRE integration, cannot get identities, and therefore cannot reach READY status, unless there is a preexisting, matching SPIRE registration created for them ahead of time.
See the [SPIRE docs on registering workloads](https://spiffe.io/docs/latest/deploying/registering/) for more information on using multiple selectors to strengthen attestation criteria, and the selectors available.
This section describes the options available for registering Istio workloads in a SPIRE Server and provides some example workload registrations.
Istio currently requires a specific SPIFFE ID format for workloads. All registrations must follow the Istio SPIFFE ID pattern: `spiffe://<trust.domain>/ns/<namespace>/sa/<service-account>`
### Option 1: Auto-registration using the SPIRE Controller Manager
New entries will be automatically registered for each new pod that matches the selector defined in a [ClusterSPIFFEID](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) custom resource.
Both Istio sidecars and Istio gateways need to be registered with SPIRE, so that they can request identities.
#### Istio Gateway `ClusterSPIFFEID`
The following will create a `ClusterSPIFFEID`, which will auto-register any Istio Ingress gateway pod with SPIRE if it is scheduled into the `istio-system` namespace, and has a service account named `istio-ingressgateway-service-account`. These selectors are used as a simple example; consult the [SPIRE Controller Manager documentation](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) for more details.
$ kubectl apply -f - <<EOF
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
name: istio-ingressgateway-reg
spec:
  spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
workloadSelectorTemplates:
- "k8s:ns:istio-system"
- "k8s:sa:istio-ingressgateway-service-account"
EOF
#### Istio Sidecar `ClusterSPIFFEID`
The following will create a `ClusterSPIFFEID` which will auto-register any pod with the `spiffe.io/spire-managed-identity: true` label that is deployed into the `default` namespace with SPIRE. These selectors are used as a simple example; consult the [SPIRE Controller Manager documentation](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) for more details.
$ kubectl apply -f - <<EOF
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
name: istio-sidecar-reg
spec:
  spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
podSelector:
matchLabels:
spiffe.io/spire-managed-identity: "true"
workloadSelectorTemplates:
- "k8s:ns:default"
EOF
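After applying a `ClusterSPIFFEID`, you can check that the SPIRE Controller Manager has picked it up and is creating entries for matching pods; a minimal sketch (the exact status fields may vary by SPIRE Controller Manager version):

$ kubectl get clusterspiffeids
$ kubectl describe clusterspiffeid istio-sidecar-reg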
### Option 2: Manual Registration
If you wish to manually create your SPIRE registrations, rather than use the SPIRE Controller Manager mentioned in [the recommended option](#option-1-auto-registration-using-the-spire-controller-manager), refer to the [SPIRE documentation on manual registration](https://spiffe.io/docs/latest/deploying/registering/).
Below are the equivalent manual registrations based off the automatic registrations in [Option 1](#option-1-auto-registration-using-the-spire-controller-manager). The following steps assume you have [already followed the SPIRE documentation to manually register your SPIRE agent and node attestation](https://spiffe.io/docs/latest/deploying/registering/#1-defining-the-spiffe-id-of-the-agent) and that your SPIRE agent was registered with the SPIFFE identity `spiffe://example.org/ns/spire/sa/spire-agent`.
1. Get the `spire-server` pod:
$ SPIRE_SERVER_POD=$(kubectl get pod -l statefulset.kubernetes.io/pod-name=spire-server-0 -n spire-server -o jsonpath="{.items[0].metadata.name}")
1. Register an entry for the Istio Ingress gateway pod:
$ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \
/opt/spire/bin/spire-server entry create \
-spiffeID spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account \
-parentID spiffe://example.org/ns/spire/sa/spire-agent \
-selector k8s:sa:istio-ingressgateway-service-account \
-selector k8s:ns:istio-system \
-socketPath /run/spire/sockets/server.sock
Entry ID : 6f2fe370-5261-4361-ac36-10aae8d91ff7
SPIFFE ID : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account
Parent ID : spiffe://example.org/ns/spire/sa/spire-agent
Revision : 0
TTL : default
Selector : k8s:ns:istio-system
Selector : k8s:sa:istio-ingressgateway-service-account
1. Register an entry for workloads injected with an Istio sidecar:
$ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \
/opt/spire/bin/spire-server entry create \
-spiffeID spiffe://example.org/ns/default/sa/curl \
-parentID spiffe://example.org/ns/spire/sa/spire-agent \
-selector k8s:ns:default \
-selector k8s:pod-label:spiffe.io/spire-managed-identity:true \
-socketPath /run/spire/sockets/server.sock
## Install Istio
1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/).
1. Create the Istio configuration with custom patches for the Ingress Gateway and `istio-proxy`. The Ingress Gateway component includes the `spiffe.io/spire-managed-identity: "true"` label.
$ cat <<EOF > ./istio.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
spec:
profile: default
meshConfig:
trustDomain: example.org
values:
# This is used to customize the sidecar template.
# It adds both the label to indicate that SPIRE should manage the
# identity of this pod, as well as the CSI driver mounts.
sidecarInjectorWebhook:
templates:
spire: |
labels:
spiffe.io/spire-managed-identity: "true"
spec:
containers:
- name: istio-proxy
volumeMounts:
- name: workload-socket
mountPath: /run/secrets/workload-spiffe-uds
readOnly: true
volumes:
- name: workload-socket
csi:
driver: "csi.spiffe.io"
readOnly: true
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
label:
istio: ingressgateway
k8s:
overlays:
# This is used to customize the ingress gateway template.
# It adds the CSI driver mounts, as well as an init container
# to stall gateway startup until the CSI driver mounts the socket.
- apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
patches:
- path: spec.template.spec.volumes.[name:workload-socket]
value:
name: workload-socket
csi:
driver: "csi.spiffe.io"
readOnly: true
- path: spec.template.spec.containers.[name:istio-proxy].volumeMounts.[name:workload-socket]
value:
name: workload-socket
mountPath: "/run/secrets/workload-spiffe-uds"
readOnly: true
- path: spec.template.spec.initContainers
value:
- name: wait-for-spire-socket
image: busybox:1.36
volumeMounts:
- name: workload-socket
mountPath: /run/secrets/workload-spiffe-uds
readOnly: true
env:
- name: CHECK_FILE
value: /run/secrets/workload-spiffe-uds/socket
command:
- sh
- "-c"
- |-
echo "$(date -Iseconds)" Waiting for: ${CHECK_FILE}
while [[ ! -e ${CHECK_FILE} ]] ; do
echo "$(date -Iseconds)" File does not exist: ${CHECK_FILE}
sleep 15
done
ls -l ${CHECK_FILE}
EOF
1. Apply the configuration:
$ istioctl install --skip-confirmation -f ./istio.yaml
1. Check Ingress Gateway pod state:
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-5b45864fd4-lgrxs 1/1 Running 0 17s
istiod-989f54d9c-sg7sn 1/1 Running 0 23s
The Ingress Gateway pod is `Ready` since the corresponding registration entry is automatically created for it on the SPIRE Server. Envoy is able to fetch cryptographic identities from SPIRE.
This configuration also adds an `initContainer` to the gateway that will wait for SPIRE to create the UNIX Domain Socket before starting the `istio-proxy`. If the SPIRE agent is not ready, or has not been properly configured with the same socket path, the Ingress Gateway `initContainer` will wait forever.
1. Deploy an example workload:
$ istioctl kube-inject --filename @samples/security/spire/curl-spire.yaml@ | kubectl apply -f -
    In addition to needing the `spiffe.io/spire-managed-identity` label, the workload will need the SPIFFE CSI Driver volume to access the SPIRE Agent socket. To accomplish this,
    you can leverage the `spire` pod annotation template from the [Install Istio](#install-istio) section or add the CSI volume to
    the deployment spec of your workload. Both of these alternatives are highlighted in the example snippet below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: curl
spec:
replicas: 1
selector:
matchLabels:
app: curl
template:
metadata:
labels:
app: curl
# Injects custom sidecar template
annotations:
inject.istio.io/templates: "sidecar,spire"
spec:
terminationGracePeriodSeconds: 0
serviceAccountName: curl
containers:
- name: curl
image: curlimages/curl
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
volumeMounts:
- name: tmp
mountPath: /tmp
securityContext:
runAsUser: 1000
volumes:
- name: tmp
emptyDir: {}
# CSI volume
- name: workload-socket
csi:
driver: "csi.spiffe.io"
readOnly: true
    The Istio configuration shares the `spiffe-csi-driver` with the Ingress Gateway and the sidecars that are going to be injected into workload pods, granting them access to the SPIRE Agent's UNIX Domain Socket.
See [Verifying that identities were created for workloads](#verifying-that-identities-were-created-for-workloads)
to check issued identities.
## Verifying that identities were created for workloads
Use the following command to confirm that identities were created for the workloads:
$ kubectl exec -t "$SPIRE_SERVER_POD" -n spire-server -c spire-server -- ./bin/spire-server entry show
Found 2 entries
Entry ID : c8dfccdc-9762-4762-80d3-5434e5388ae7
SPIFFE ID : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account
Parent ID : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
Revision : 0
X509-SVID TTL : default
JWT-SVID TTL : default
Selector : k8s:pod-uid:88b71387-4641-4d9c-9a89-989c88f7509d
Entry ID : af7b53dc-4cc9-40d3-aaeb-08abbddd8e54
SPIFFE ID : spiffe://example.org/ns/default/sa/curl
Parent ID : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
Revision : 0
X509-SVID TTL : default
JWT-SVID TTL : default
Selector : k8s:pod-uid:ee490447-e502-46bd-8532-5a746b0871d6
Check the Ingress-gateway pod state:
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-5b45864fd4-lgrxs 1/1 Running 0 60s
istiod-989f54d9c-sg7sn 1/1 Running 0 45s
After registering an entry for the Ingress-gateway pod, Envoy receives the identity issued by SPIRE and uses it for all TLS and mTLS communications.
### Check that the workload identity was issued by SPIRE
1. Get pod information:
$ CURL_POD=$(kubectl get pod -l app=curl -o jsonpath="{.items[0].metadata.name}")
1. Retrieve curl's SVID identity document using the `istioctl proxy-config secret` command:
$ istioctl proxy-config secret "$CURL_POD" -o json | jq -r \
'.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem
1. Inspect the certificate and verify that SPIRE was the issuer:
$ openssl x509 -in chain.pem -text | grep SPIRE
Subject: C = US, O = SPIRE, CN = curl-5f4d47c948-njvpk
## SPIFFE federation
SPIRE Servers are able to authenticate SPIFFE identities originating from different trust domains. This is known as SPIFFE federation.
SPIRE Agent can be configured to push federated bundles to Envoy through the Envoy SDS API, allowing Envoy to use [validation context](https://spiffe.io/docs/latest/microservices/envoy/#validation-context)
to verify peer certificates and trust a workload from another trust domain.
To enable Istio to federate SPIFFE identities through SPIRE integration, consult [SPIRE Agent SDS configuration](https://github.com/spiffe/spire/blob/main/doc/spire_agent.md#sds-configuration) and set the following
SDS configuration values for your SPIRE Agent configuration file.
| Configuration | Description | Resource Name |
|----------------------------|--------------------------------------------------------------------------------------------------|---------------|
| `default_svid_name` | The TLS Certificate resource name to use for the default `X509-SVID` with Envoy SDS | default |
| `default_bundle_name` | The Validation Context resource name to use for the default X.509 bundle with Envoy SDS | null |
| `default_all_bundles_name` | The Validation Context resource name to use for all bundles (including federated) with Envoy SDS | ROOTCA |
This will allow Envoy to get federated bundles directly from SPIRE.
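Put together, the relevant section of the SPIRE Agent configuration file might look roughly like the following sketch; consult the SPIRE Agent documentation for the authoritative syntax of the surrounding configuration:

agent {
    # ... existing agent configuration ...
    sds = {
        default_svid_name = "default"
        default_bundle_name = "null"
        default_all_bundles_name = "ROOTCA"
    }
}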
### Create federated registration entries
- If using the SPIRE Controller Manager, create federated entries for workloads by setting the `federatesWith` field of the [ClusterSPIFFEID CR](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) to the trust domains you want the pod to federate with:
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
name: federation
spec:
  spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
podSelector:
matchLabels:
spiffe.io/spire-managed-identity: "true"
federatesWith: ["example.io", "example.ai"]
- For manual registration see [Create Registration Entries for Federation](https://spiffe.io/docs/latest/architecture/federation/readme/#create-registration-entries-for-federation).
## Cleanup SPIRE
Remove SPIRE by uninstalling its Helm charts:
$ helm delete -n spire-server spire
$ helm delete -n spire-server spire-crds
| istio | title SPIRE description How to configure Istio to integrate with SPIRE to get cryptographic identities through Envoy s SDS API weight 31 keywords kubernetes spiffe spire aliases owner istio wg networking maintainers test yes SPIRE https spiffe io docs latest spire about spire concepts is a production ready implementation of the SPIFFE specification that performs node and workload attestation in order to securely issue cryptographic identities to workloads running in heterogeneous environments SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with Envoy s SDS API https www envoyproxy io docs envoy latest configuration security secret Istio can detect the existence of a UNIX Domain Socket that implements the Envoy SDS API on a defined socket path allowing Envoy to communicate and fetch identities directly from it This integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio s powerful service management For example SPIRE s plugin architecture enables diverse workload attestation options beyond the Kubernetes namespace and service account attestation offered by Istio SPIRE s node attestation extends attestation to the physical or virtual hardware on which workloads run For a quick demo of how this SPIRE integration with Istio works see Integrating SPIRE as a CA through Envoy s SDS API samples security spire Install SPIRE We recommend you follow SPIRE s installation instructions and best practices for installing SPIRE and for deploying SPIRE in production environments For the examples in this guide the SPIRE Helm charts https artifacthub io packages helm spiffe spire will be used with upstream defaults to focus on just the configuration necessary to integrate SPIRE and Istio helm upgrade install n spire server spire crds spire crds repo https spiffe github io helm charts hardened create namespace helm upgrade install n spire server spire spire repo https spiffe github io helm charts hardened wait set global spire trustDomain example org See the SPIRE Helm chart https artifacthub io packages helm spiffe spire documentation for other values you can configure for your installation It is important that SPIRE and Istio are configured with the exact same trust domain to prevent authentication and authorization errors and that the SPIFFE CSI driver https github com spiffe spiffe csi is enabled and installed By default the above will also install The SPIFFE CSI driver https github com spiffe spiffe csi which is used to mount an Envoy compatible SDS socket into proxies Using the SPIFFE CSI driver to mount SDS sockets is strongly recommended by both Istio and SPIRE as hostMounts are a larger security risk and introduce operational hurdles This guide assumes the use of the SPIFFE CSI driver The SPIRE Controller Manager https github com spiffe spire controller manager which eases the creation of SPIFFE registrations for workloads Register workloads By design SPIRE only grants identities to workloads that have been registered with the SPIRE server this includes user workloads as well as Istio components Istio sidecars and gateways once configured for SPIRE integration cannot get identities and therefore cannot reach READY status unless there is a preexisting matching SPIRE registration created for them ahead of time See the SPIRE docs on registering workloads https spiffe io docs latest deploying registering for more information on using multiple selectors 
to strengthen attestation criteria and the selectors available This section describes the options available for registering Istio workloads in a SPIRE Server and provides some example workload registrations Istio currently requires a specific SPIFFE ID format for workloads All registrations must follow the Istio SPIFFE ID pattern spiffe trust domain ns namespace sa service account Option 1 Auto registration using the SPIRE Controller Manager New entries will be automatically registered for each new pod that matches the selector defined in a ClusterSPIFFEID https github com spiffe spire controller manager blob main docs clusterspiffeid crd md custom resource Both Istio sidecars and Istio gateways need to be registered with SPIRE so that they can request identities Istio Gateway ClusterSPIFFEID The following will create a ClusterSPIFFEID which will auto register any Istio Ingress gateway pod with SPIRE if it is scheduled into the istio system namespace and has a service account named istio ingressgateway service account These selectors are used as a simple example consult the SPIRE Controller Manager documentation https github com spiffe spire controller manager blob main docs clusterspiffeid crd md for more details kubectl apply f EOF apiVersion spire spiffe io v1alpha1 kind ClusterSPIFFEID metadata name istio ingressgateway reg spec spiffeIDTemplate spiffe ns sa workloadSelectorTemplates k8s ns istio system k8s sa istio ingressgateway service account EOF Istio Sidecar ClusterSPIFFEID The following will create a ClusterSPIFFEID which will auto register any pod with the spiffe io spire managed identity true label that is deployed into the default namespace with SPIRE These selectors are used as a simple example consult the SPIRE Controller Manager documentation https github com spiffe spire controller manager blob main docs clusterspiffeid crd md for more details kubectl apply f EOF apiVersion spire spiffe io v1alpha1 kind ClusterSPIFFEID metadata name istio sidecar reg spec spiffeIDTemplate spiffe ns sa podSelector matchLabels spiffe io spire managed identity true workloadSelectorTemplates k8s ns default EOF Option 2 Manual Registration If you wish to manually create your SPIRE registrations rather than use the SPIRE Controller Manager mentioned in the recommended option option 1 auto registration using the spire controller manager refer to the SPIRE documentation on manual registration https spiffe io docs latest deploying registering Below are the equivalent manual registrations based off the automatic registrations in Option 1 option 1 auto registration using the spire controller manager The following steps assume you have already followed the SPIRE documentation to manually register your SPIRE agent and node attestation https spiffe io docs latest deploying registering 1 defining the spiffe id of the agent and that your SPIRE agent was registered with the SPIFFE identity spiffe example org ns spire sa spire agent 1 Get the spire server pod SPIRE SERVER POD kubectl get pod l statefulset kubernetes io pod name spire server 0 n spire server o jsonpath items 0 metadata name 1 Register an entry for the Istio Ingress gateway pod kubectl exec n spire SPIRE SERVER POD opt spire bin spire server entry create spiffeID spiffe example org ns istio system sa istio ingressgateway service account parentID spiffe example org ns spire sa spire agent selector k8s sa istio ingressgateway service account selector k8s ns istio system socketPath run spire sockets server sock Entry ID 6f2fe370 5261 4361 ac36 
---
title: cert-manager
description: Information on how to integrate with cert-manager.
weight: 26
keywords: [integration,cert-manager]
aliases:
- /docs/tasks/traffic-management/ingress/ingress-certmgr/
- /docs/examples/advanced-gateways/ingress-certmgr/
owner: istio/wg-environments-maintainers
test: no
---
[cert-manager](https://cert-manager.io/) is a tool that automates certificate management.
This can be integrated with Istio gateways to manage TLS certificates.
## Configuration
Consult the [cert-manager installation documentation](https://cert-manager.io/docs/installation/kubernetes/)
to get started. No special changes are needed to work with Istio.
## Usage
### Istio Gateway
cert-manager can be used to write a secret to Kubernetes, which can then be referenced by a Gateway.
1. To get started, configure an `Issuer` resource, following the [cert-manager issuer documentation](https://cert-manager.io/docs/configuration/). `Issuer`s are Kubernetes resources that represent certificate authorities (CAs) that can generate signed certificates by honoring certificate signing requests. For example, an `Issuer` may look like:
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: istio-system
spec:
  ca:
    secretName: ca-key-pair
```
For a common `Issuer` type, ACME, a pod and service are created to respond to challenge requests in order to verify that the client owns the domain. To respond to those challenges, an endpoint at `http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>` will need to be reachable. That configuration may be implementation-specific.
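For instance, if you use an ACME `Issuer` and want the challenge to be solved through the Istio ingress class, a minimal sketch (the issuer name, server URL, email, and secret name below are placeholders) might look like:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt            # placeholder issuer name
  namespace: istio-system
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: user@example.com                # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-account-key        # secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: istio                     # solve HTTP-01 challenges via the Istio ingress class
```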
1. Next, configure a `Certificate` resource, following the
[cert-manager documentation](https://cert-manager.io/docs/usage/certificate/).
The `Certificate` should be created in the same namespace as the `istio-ingressgateway` deployment.
For example, a `Certificate` may look like:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingress-cert
  namespace: istio-system
spec:
  secretName: ingress-cert
  commonName: my.example.com
  dnsNames:
  - my.example.com
  ...
```
1. Once the `Certificate` has been created, cert-manager will create the corresponding secret in the `istio-system` namespace.
This can then be referenced in the `tls` config for a Gateway under `credentialName`:
```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert # This should match the Certificate secretName
    hosts:
    - my.example.com # This should match a DNS name in the Certificate
```
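Note that the Gateway only terminates TLS; to route the incoming requests to a backend, you still need a `VirtualService` bound to it. A minimal sketch, assuming a backend service named `my-service` listening on port 80, could be:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: my-virtualservice      # placeholder name
spec:
  hosts:
  - my.example.com
  gateways:
  - gateway                    # the Gateway defined above
  http:
  - route:
    - destination:
        host: my-service       # placeholder backend service
        port:
          number: 80
```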
### Kubernetes Ingress
cert-manager provides direct integration with Kubernetes Ingress by configuring an
[annotation on the Ingress object](https://cert-manager.io/docs/usage/ingress/).
If this method is used, the Ingress must reside in the same namespace as the
`istio-ingressgateway` deployment, as secrets will only be read within the same namespace.
Alternatively, a `Certificate` can be created as described in [Istio Gateway](#istio-gateway),
then referenced in the `Ingress` object:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: my.example.com
    http: ...
  tls:
  - hosts:
    - my.example.com # This should match a DNS name in the Certificate
    secretName: ingress-cert # This should match the Certificate secretName
```
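In either case, one way to confirm that cert-manager has issued the certificate and written the secret is to check the `Certificate` status:

```console
$ kubectl get certificate -n istio-system ingress-cert
```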
---
title: Prometheus
description: How to integrate with Prometheus.
weight: 30
keywords: [integration,prometheus]
owner: istio/wg-environments-maintainers
test: n/a
---
[Prometheus](https://prometheus.io/) is an open source monitoring system and time series database.
You can use Prometheus with Istio to record metrics that track the health of Istio and of
applications within the service mesh. You can visualize metrics using tools like
[Grafana](/docs/ops/integrations/grafana/) and [Kiali](/docs/tasks/observability/kiali/).
## Installation
### Option 1: Quick start
Istio provides a basic sample installation to quickly get Prometheus up and running:
```console
$ kubectl apply -f /samples/addons/prometheus.yaml
```
This will deploy Prometheus into your cluster. This is intended for demonstration only, and is not tuned for performance or security.
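Once the pod is running, one convenient way to open the Prometheus UI locally is through `istioctl`:

```console
$ istioctl dashboard prometheus
```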
While the quick-start configuration is well-suited for small clusters and monitoring for short time horizons,
it is not suitable for large-scale meshes or monitoring over a period of days or weeks. In particular,
the introduced labels can increase metrics cardinality, requiring a large amount of storage. And, when trying
to identify trends and differences in traffic over time, access to historical data can be paramount.
### Option 2: Customizable install
Consult the [Prometheus documentation](https://www.prometheus.io/) to get started
deploying Prometheus into your environment. See [Configuration](#configuration)
for more information on configuring Prometheus to scrape Istio deployments.
## Configuration
In an Istio mesh, each component exposes an endpoint that emits metrics. Prometheus works
by scraping these endpoints and collecting the results. This is configured through the
[Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/)
which controls settings for which endpoints to query, the port and path to query, TLS settings, and more.
To gather metrics for the entire mesh, configure Prometheus to scrape:
1. The control plane (`istiod` deployment)
1. Ingress and Egress gateways
1. The Envoy sidecar
1. The user applications (if they expose Prometheus metrics)
To simplify the configuration of metrics, Istio offers two modes of operation.
### Option 1: Metrics merging
To simplify configuration, Istio has the ability to control scraping entirely by
`prometheus.io` annotations. This allows Istio scraping to work out of the box with
standard configurations such as the ones provided by the
[Helm `stable/prometheus`](https://github.com/helm/charts/tree/master/stable/prometheus) charts.
While `prometheus.io` annotations are not a core part of Prometheus,
they have become the de facto standard to configure scraping.
This option is enabled by default but can be disabled by passing
`--set meshConfig.enablePrometheusMerge=false` during [installation](/docs/setup/install/istioctl/).
When enabled, appropriate `prometheus.io` annotations will be added to all data plane pods to set up scraping.
If these annotations already exist, they will be overwritten. With this option, the Envoy sidecar will
merge Istio's metrics with the application metrics. The merged metrics will be scraped from `:15020/stats/prometheus`.
This option exposes all the metrics in plain text.
This feature may not suit your needs in the following situations:
* You need to scrape metrics using TLS.
* Your application exposes metrics with the same names as Istio metrics. For example,
your application metrics expose an `istio_requests_total` metric.
This might happen if the application is itself running Envoy.
* Your Prometheus deployment is not configured to scrape based on standard `prometheus.io` annotations.
If required, this feature can be disabled per workload by adding a `prometheus.istio.io/merge-metrics: "false"` annotation on a pod.
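For example, a sketch of opting a single workload out of metrics merging in its pod template could look like:

```yaml
spec:
  template:
    metadata:
      annotations:
        prometheus.istio.io/merge-metrics: "false"   # disable metrics merging for this workload only
```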
### Option 2: Customized scraping configurations
To configure an existing Prometheus instance to scrape stats generated by Istio, several jobs need to be added.
* To scrape `Istiod` stats, the following example job can be added to scrape its `http-monitoring` port:
```yaml
- job_name: 'istiod'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istiod;http-monitoring
```
* To scrape Envoy stats, including sidecar proxies and gateway proxies,
the following job can be added to scrape ports that end with `-envoy-prom`:
```yaml
- job_name: 'envoy-stats'
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: '.*-envoy-prom'
```
* For application stats, if [Strict mTLS](/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode)
is not enabled, your existing scraping configuration should still work. Otherwise,
Prometheus needs to be configured to [scrape with Istio certs](#tls-settings).
#### TLS settings
The control plane, gateway, and Envoy sidecar metrics will all be scraped over cleartext.
However, the application metrics will follow whatever [Istio authentication policy](/docs/tasks/security/authentication/authn-policy) has been configured
for the workload.
* If you use `STRICT` mode, then Prometheus will need to be configured to scrape using Istio certificates as described below.
* If you use `PERMISSIVE` mode, the workload typically accepts TLS and cleartext. However, Prometheus cannot send the special variant of TLS Istio requires for `PERMISSIVE` mode. As a result, you must *not* configure TLS in Prometheus.
* If you use `DISABLE` mode, no TLS configuration is required for Prometheus.
Note this only applies to Istio-terminated TLS. If your application directly handles TLS:
* `STRICT` mode is not supported, as Prometheus would need to send two layers of TLS which it cannot do.
* `PERMISSIVE` mode and `DISABLE` mode should be configured the same as if Istio was not present.
See [Understanding TLS Configuration](/docs/ops/configuration/traffic-management/tls-configuration/) for more information.
One way to provision Istio certificates for Prometheus is by injecting a sidecar
which will rotate SDS certificates and output them to a volume that can be shared with Prometheus.
However, the sidecar should not intercept requests for Prometheus because Prometheus's
model of direct endpoint access is incompatible with Istio's sidecar proxy model.
To achieve this, configure a cert volume mount on the Prometheus server container:
```yaml
containers:
- name: prometheus-server
  ...
  volumeMounts:
  - mountPath: /etc/prom-certs/
    name: istio-certs
volumes:
- emptyDir:
    medium: Memory
  name: istio-certs
```
Then add the following annotations to the Prometheus deployment pod template,
and deploy it with [sidecar injection](/docs/setup/additional-setup/sidecar-injection/).
This configures the sidecar to write a certificate to the shared volume, but without configuring traffic redirection:
```yaml
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeInboundPorts: ""       # do not intercept any inbound ports
        traffic.sidecar.istio.io/includeOutboundIPRanges: ""   # do not intercept any outbound traffic
        proxy.istio.io/config: |  # configure an env variable `OUTPUT_CERTS` to write certificates to the given folder
          proxyMetadata:
            OUTPUT_CERTS: /etc/istio-output-certs
        sidecar.istio.io/userVolumeMount: '[{"name": "istio-certs", "mountPath": "/etc/istio-output-certs"}]' # mount the shared volume at sidecar proxy
```
Finally, set the scraping job TLS context as follows:
```yaml
scheme: https
tls_config:
  ca_file: /etc/prom-certs/root-cert.pem
  cert_file: /etc/prom-certs/cert-chain.pem
  key_file: /etc/prom-certs/key.pem
  insecure_skip_verify: true  # Prometheus does not support Istio security naming, thus skip verifying target pod certificate
```
## Best practices
For larger meshes, advanced configuration might help Prometheus scale.
See [Using Prometheus for production-scale monitoring](/docs/ops/best-practices/observability/#using-prometheus-for-production-scale-monitoring)
for more information.
---
title: Virtual Machine Architecture
description: Describes Istio's high-level architecture for virtual machines.
weight: 25
keywords:
- virtual-machine
test: n/a
owner: istio/wg-environments-maintainers
---
Before reading this document, be sure to review [Istio's architecture](/docs/ops/deployment/architecture/) and [deployment models](/docs/ops/deployment/deployment-models/).
This page builds on those documents to explain how Istio can be extended to support joining virtual machines into the mesh.
Istio's virtual machine support allows connecting workloads outside of a Kubernetes cluster to the mesh.
This enables legacy applications, or applications not suitable to run in a containerized environment, to get all the benefits that Istio provides to applications running inside Kubernetes.
For workloads running on Kubernetes, the Kubernetes platform itself provides various features like service discovery, DNS resolution, and health checks which are often missing in virtual machine environments.
Istio enables these features for workloads running on virtual machines, and in addition allows these workloads to utilize Istio functionality such as mutual TLS (mTLS), rich telemetry, and advanced traffic management capabilities.
The following diagram shows the architecture of a mesh with virtual machines:
In this mesh, there is a single [network](/docs/ops/deployment/deployment-models/#network-models), where pods and virtual machines can communicate directly with each other.
Control plane traffic, including XDS configuration and certificate signing, are sent through a Gateway in the cluster.
This ensures that the virtual machines have a stable address to connect to when they are bootstrapping. Pods and virtual machines can communicate directly with each other without requiring any intermediate Gateway.
In this mesh, there are multiple [networks](/docs/ops/deployment/deployment-models/#network-models), where pods and virtual machines are not able to communicate directly with each other.
Control plane traffic, including XDS configuration and certificate signing, are sent through a Gateway in the cluster.
Similarly, all communication between pods and virtual machines goes through the gateway, which acts as a bridge between the two networks.
## Service association
Istio provides two mechanisms to represent virtual machine workloads:
* [`WorkloadGroup`](/docs/reference/config/networking/workload-group/) represents a logical group of virtual machine workloads that share common properties. This is similar to a `Deployment` in Kubernetes.
* [`WorkloadEntry`](/docs/reference/config/networking/workload-entry/) represents a single instance of a virtual machine workload. This is similar to a `Pod` in Kubernetes.
Creating these resources (`WorkloadGroup` and `WorkloadEntry`) does not result in provisioning of any resources or running any virtual machine workloads.
Rather, these resources just reference these workloads and inform Istio how to configure the mesh appropriately.
When adding a virtual machine workload to the mesh, you will need to create a `WorkloadGroup` that acts as a template for each `WorkloadEntry` instance:
```yaml
apiVersion: networking.istio.io/v1
kind: WorkloadGroup
metadata:
  name: product-vm
spec:
  metadata:
    labels:
      app: product
  template:
    serviceAccount: default
  probe:
    httpGet:
      port: 8080
```
Once a virtual machine has been [configured and added to the mesh](/docs/setup/install/virtual-machine/#configure-the-virtual-machine), a corresponding `WorkloadEntry` will be automatically created by the Istio control plane.
For example:
```yaml
apiVersion: networking.istio.io/v1
kind: WorkloadEntry
metadata:
  annotations:
    istio.io/autoRegistrationGroup: product-vm
  labels:
    app: product
  name: product-vm-1.2.3.4
spec:
  address: 1.2.3.4
  labels:
    app: product
  serviceAccount: default
```
This `WorkloadEntry` resource describes a single instance of a workload, similar to a pod in Kubernetes. When the workload is removed from the mesh, the `WorkloadEntry` resource will
be automatically removed. Additionally, if any probes are configured in the `WorkloadGroup` resource, the Istio control plane automatically updates the health status of associated `WorkloadEntry` instances.
In order for consumers to reliably call your workload, it's recommended to declare a `Service` association. This allows clients to reach a stable hostname, like `product.default.svc.cluster.local`, rather than an ephemeral IP address. This also enables you to use advanced routing capabilities in Istio via the `DestinationRule` and `VirtualService` APIs.
Any Kubernetes service can transparently select workloads across both pods and virtual machines via the selector fields which are matched with pod and `WorkloadEntry` labels respectively.
For example, a `Service` named `product` is composed of a `Pod` and a `WorkloadEntry`:
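A minimal sketch of such a `Service`, assuming both the pod and the `WorkloadEntry` carry the `app: product` label shown earlier, might be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: product
spec:
  selector:
    app: product        # matches the pod label and the WorkloadEntry label alike
  ports:
  - name: http
    port: 80
    targetPort: 8080    # assumed application port
```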
With this configuration, requests to `product` would be load-balanced across both the pod and virtual machine workload instances.
## DNS
Kubernetes provides DNS resolution in pods for `Service` names allowing pods to easily communicate with one another by stable hostnames.
For virtual machine expansion, Istio provides similar functionality via a [DNS Proxy](/docs/ops/configuration/traffic-management/dns-proxy/).
This feature redirects all DNS queries from the virtual machine workload to the Istio proxy, which maintains a mapping of hostnames to IP addresses.
As a result, workloads running on virtual machines can transparently call `Service`s (similar to pods) without requiring any additional configuration.
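Per the DNS Proxy documentation, this capture is typically enabled through proxy metadata at installation time; a sketch of the relevant fragment might be:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # instruct the Istio proxy agent to capture and resolve DNS queries
        ISTIO_META_DNS_CAPTURE: "true"
```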
---
title: Application Requirements
description: Requirements of applications deployed in an Istio-enabled cluster.
weight: 40
keywords:
- kubernetes
- sidecar
- sidecar-injection
- deployment-models
- pods
- setup
aliases:
- /docs/setup/kubernetes/spec-requirements/
- /docs/setup/kubernetes/prepare/spec-requirements/
- /docs/setup/kubernetes/prepare/requirements/
- /docs/setup/kubernetes/additional-setup/requirements/
- /docs/setup/additional-setup/requirements
- /docs/ops/setup/required-pod-capabilities
- /help/ops/setup/required-pod-capabilities
- /docs/ops/prep/requirements
- /docs/ops/deployment/requirements
owner: istio/wg-environments-maintainers
test: n/a
---
Istio provides a great deal of functionality to applications with little or no impact on the application code itself.
Many Kubernetes applications can be deployed in an Istio-enabled cluster without any changes at all.
However, there are some implications of Istio's sidecar model that may need special consideration when deploying
an Istio-enabled application.
This document describes these application considerations and specific requirements of Istio enablement.
## Pod requirements
To be part of a mesh, Kubernetes pods must satisfy the following requirements:
- **Application UIDs**: Ensure your pods do **not** run applications as a user
with the user ID (UID) value of `1337` because `1337` is reserved for the sidecar proxy.
- **`NET_ADMIN` and `NET_RAW` capabilities**: If [pod security policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
are [enforced](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies)
in your cluster and unless you use the [Istio CNI Plugin](/docs/setup/additional-setup/cni/), your pods must have the
`NET_ADMIN` and `NET_RAW` capabilities allowed. The initialization containers of the Envoy
proxies require these capabilities.
To check if the `NET_ADMIN` and `NET_RAW` capabilities are allowed for your pods, you need to check if their
[service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
can use a pod security policy that allows the `NET_ADMIN` and `NET_RAW` capabilities.
If you haven't specified a service account in your pods' deployment, the pods run using
the `default` service account in their deployment's namespace.
To list the capabilities for a service account, replace `<your namespace>` and `<your service account>`
with your values in the following command:
```console
$ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:<your namespace>:<your service account>) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
```
For example, to check for the `default` service account in the `default` namespace, run the following command:
```console
$ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:default:default) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
```
If you see `NET_ADMIN` and `NET_RAW` or `*` in the list of capabilities of one of the allowed
policies for your service account, your pods have permission to run the Istio init containers.
Otherwise, you will need to [provide the permission](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies).
- **Pod labels**: We recommend explicitly declaring pods with an application identifier and version by using a pod label.
These labels add contextual information to the metrics and telemetry that Istio collects.
Each of these values are read from multiple labels ordered from highest to lowest precedence:
- Application name: `service.istio.io/canonical-name`, `app.kubernetes.io/name`, or `app`.
- Application version: `service.istio.io/canonical-revision`, `app.kubernetes.io/version`, or `version`.
- **Named service ports**: Service ports may optionally be named to explicitly specify a protocol.
See [Protocol Selection](/docs/ops/configuration/traffic-management/protocol-selection/) for
more details. If a pod belongs to multiple [Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/),
the services cannot use the same port number for different protocols, for
instance HTTP and TCP.
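To illustrate the pod label and named service port requirements above, a hypothetical `Service` and `Deployment` (names, labels, ports, and image are placeholders) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: reviews
spec:
  selector:
    app: reviews
  ports:
  - name: http          # the port name explicitly declares the protocol
    port: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews    # application name label
        version: v1     # application version label
    spec:
      containers:
      - name: reviews
        image: example.com/reviews:v1   # placeholder image
        ports:
        - containerPort: 9080
```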
## Ports used by Istio
The following ports and protocols are used by the Istio sidecar proxy (Envoy).
To avoid port conflicts with sidecars, applications should not use any of the ports used by Envoy.
| Port | Protocol | Description | Pod-internal only |
|----|----|----|----|
| 15000 | TCP | Envoy admin port (commands/diagnostics) | Yes |
| 15001 | TCP | Envoy outbound | No |
| 15004 | HTTP | Debug port | Yes |
| 15006 | TCP | Envoy inbound | No |
| 15008 | HTTP2 | HBONE mTLS tunnel port | No |
| 15020 | HTTP | Merged Prometheus telemetry from Istio agent, Envoy, and application | No |
| 15021 | HTTP | Health checks | No |
| 15053 | DNS | DNS port, if capture is enabled | Yes |
| 15090 | HTTP | Envoy Prometheus telemetry | No |
The following ports and protocols are used by the Istio control plane (istiod).
| Port | Protocol | Description | Local host only |
|----|----|----|----|
| 443 | HTTPS | Webhooks service port | No |
| 8080 | HTTP | Debug interface (deprecated, container port only) | No |
| 15010 | GRPC | XDS and CA services (Plaintext, only for secure networks) | No |
| 15012 | GRPC | XDS and CA services (TLS and mTLS, recommended for production use) | No |
| 15014 | HTTP | Control plane monitoring | No |
| 15017 | HTTPS | Webhook container port, forwarded from 443 | No |
## Server First Protocols
Some protocols are "Server First" protocols, which means the server will send the first bytes. This may have an impact on
[`PERMISSIVE`](/docs/reference/config/security/peer_authentication/#PeerAuthentication-MutualTLS-Mode) mTLS and [Automatic protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection).
Both of these features work by inspecting the initial bytes of a connection to determine the protocol, which is incompatible with server first protocols.
In order to support these cases, follow the [Explicit protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection) steps to declare the protocol of the application as `TCP`.
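For example, a `Service` exposing a server first protocol on a custom port could declare the protocol through the port name (a sketch; names and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service     # placeholder name
spec:
  selector:
    app: my-tcp-app        # placeholder selector
  ports:
  - name: tcp-custom       # the "tcp-" prefix disables protocol sniffing on this port
    port: 4000
```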
The following ports are known to commonly carry server first protocols, and are automatically assumed to be `TCP`:
|Protocol|Port|
|--------|----|
| SMTP |25 |
| DNS |53 |
| MySQL |3306|
| MongoDB|27017|
Because TLS communication is not server first, TLS encrypted server first traffic will work with automatic protocol detection as long as you make sure that all traffic subjected to TLS sniffing is encrypted:
1. Configure `mTLS` mode `STRICT` for the server. This will enforce TLS encryption for all requests.
1. Configure `mTLS` mode `DISABLE` for the server. This will disable the TLS sniffing, allowing server first protocols to be used.
1. Configure all clients to send `TLS` traffic, generally through a [`DestinationRule`](/docs/reference/config/networking/destination-rule/#ClientTLSSettings) or by relying on auto mTLS.
1. Configure your application to send TLS traffic directly.
## Outbound traffic
In order to support Istio's traffic routing capabilities, traffic leaving a pod may be routed differently than
when a sidecar is not deployed.
For HTTP-based traffic, traffic is routed based on the `Host` header. This may lead to unexpected behavior if the destination IP
and `Host` header are not aligned. For example, a request like `curl 1.2.3.4 -H "Host: httpbin.default"` will be routed to the `httpbin` service,
rather than `1.2.3.4`.
For non-HTTP-based traffic (including HTTPS), Istio does not have access to a `Host` header, so routing decisions are based on the Service IP address.
One implication of this is that direct calls to pods (for example, `curl <POD_IP>`), rather than Services, will not be matched. While the traffic may
be [passed through](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services), it will not get the full Istio functionality
including mTLS encryption, traffic routing, and telemetry.
See the [Traffic Routing](/docs/ops/configuration/traffic-management/traffic-routing) page for more information.
---
title: Architecture
description: Describes Istio's high-level architecture and design goals.
weight: 10
aliases:
- /docs/concepts/architecture
- /docs/ops/architecture
owner: istio/wg-environments-maintainers
test: n/a
---
An Istio service mesh is logically split into a **data plane** and a **control
plane**.
* The **data plane** is composed of a set of intelligent proxies
([Envoy](https://www.envoyproxy.io/)) deployed as sidecars. These proxies
mediate and control all network communication between microservices. They
also collect and report telemetry on all mesh traffic.
* The **control plane** manages and configures the proxies to route traffic.
The following diagram shows the different components that make up each plane:
## Components
The following sections provide a brief overview of each of Istio's core components.
### Envoy
Istio uses an extended version of the
[Envoy](https://www.envoyproxy.io/) proxy. Envoy is a high-performance
proxy developed in C++ to mediate all inbound and outbound traffic for all
services in the service mesh.
Envoy proxies are the only Istio components that interact with data plane
traffic.
Envoy proxies are deployed as sidecars to services, logically
augmenting the services with Envoy’s many built-in features,
for example:
* Dynamic service discovery
* Load balancing
* TLS termination
* HTTP/2 and gRPC proxies
* Circuit breakers
* Health checks
* Staged rollouts with %-based traffic split
* Fault injection
* Rich metrics
This sidecar deployment allows Istio to enforce policy decisions and extract
rich telemetry which can be sent to monitoring systems to provide information
about the behavior of the entire mesh.
The sidecar proxy model also allows you to add Istio capabilities to an
existing deployment without requiring you to rearchitect or rewrite code.
Some of the Istio features and tasks enabled by Envoy proxies include:
* Traffic control features: enforce fine-grained traffic control with rich
routing rules for HTTP, gRPC, WebSocket, and TCP traffic.
* Network resiliency features: set up retries, failovers, circuit breakers, and
fault injection.
* Security and authentication features: enforce security policies, access control,
and rate limiting defined through the configuration API.
* Pluggable extensions model based on WebAssembly that allows for custom policy
enforcement and telemetry generation for mesh traffic.
### Istiod
Istiod provides service discovery, configuration and certificate management.
Istiod converts high level routing rules that control traffic behavior into
Envoy-specific configurations, and propagates them to the sidecars at runtime.
It abstracts platform-specific service discovery mechanisms and synthesizes
them into a standard format that any sidecar conforming with the
[Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api/api) can consume.
Istio can support discovery for multiple environments such as Kubernetes or VMs.
You can use Istio's
[Traffic Management API](/docs/concepts/traffic-management/#introducing-istio-traffic-management)
to instruct Istiod to refine the Envoy configuration to exercise more granular control
over the traffic in your service mesh.
Istiod [security](/docs/concepts/security/) enables strong service-to-service and
end-user authentication with built-in identity and credential management. You
can use Istio to upgrade unencrypted traffic in the service mesh. Using
Istio, operators can enforce policies based on service identity rather than
on relatively unstable layer 3 or layer 4 network identifiers.
Additionally, you can use [Istio's authorization feature](/docs/concepts/security/#authorization)
to control who can access your services.
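As a small illustration, a sketch of an `AuthorizationPolicy` that only admits requests from a particular service account (all names are placeholders) could look like:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend       # placeholder name
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend           # placeholder workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/frontend"]   # placeholder service account identity
```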
Istiod acts as a Certificate Authority (CA) and generates certificates to allow
secure mTLS communication in the data plane.
authorization to control who can access your services Istiod acts as a Certificate Authority CA and generates certificates to allow secure mTLS communication in the data plane |
istio Istio performance and scalability summary performance title Performance and Scalability benchmarks weight 30 aliases scalability keywords scale | ---
title: Performance and Scalability
description: Istio performance and scalability summary.
weight: 30
keywords:
- performance
- scalability
- scale
- benchmarks
aliases:
- /docs/performance-and-scalability/overview
- /docs/performance-and-scalability/microbenchmarks
- /docs/performance-and-scalability/performance-testing-automation
- /docs/performance-and-scalability/realistic-app-benchmark
- /docs/performance-and-scalability/scalability
- /docs/performance-and-scalability/scenarios
- /docs/performance-and-scalability/synthetic-benchmarks
- /docs/concepts/performance-and-scalability
- /docs/ops/performance-and-scalability
owner: istio/wg-environments-maintainers
test: n/a
---
Istio makes it easy to create a network of deployed services with rich routing,
load balancing, service-to-service authentication, monitoring, and more - all
without any changes to the application code. Istio strives to provide
these benefits with minimal resource overhead and aims to support very
large meshes with high request rates while adding minimal latency.
The Istio data plane components, the Envoy proxies, handle data flowing through
the system. The Istio control plane component, Istiod, configures
the data plane. The data plane and control plane have distinct performance concerns.
## Performance summary for Istio 1.24
The [Istio load tests](https://github.com/istio/tools/tree//perf/load) mesh consists
of **1000** services and **2000** pods, handling 70,000 mesh-wide requests per second.
## Control plane performance
Istiod configures sidecar proxies based on user-authored configuration files and the current
state of the system. In a Kubernetes environment, Custom Resource Definitions (CRDs) and deployments
constitute the configuration and state of the system. Istio configuration objects, like gateways and virtual
services, provide the user-authored configuration.
To produce the configuration for the proxies, Istiod processes the combined configuration and system state
from the Kubernetes environment and the user-authored configuration.
The control plane supports thousands of services, spread across thousands of pods, with a
similar number of user-authored virtual services and other configuration objects.
Istiod's CPU and memory requirements scale with the amount of configuration and the number of possible system states.
The CPU consumption scales with the following factors:
- The rate of deployment changes.
- The rate of configuration changes.
- The number of proxies connecting to Istiod.
However, this workload is inherently horizontally scalable.
You can increase the number of Istiod instances to reduce the amount of time it takes for the configuration
to reach all proxies.
At large scale, [configuration scoping](/docs/ops/configuration/mesh/configuration-scoping) is highly recommended.
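As an illustrative sketch of configuration scoping (the resource below is an assumption, not part of the linked guide), a mesh-wide `Sidecar` resource in the root namespace can limit each proxy's configuration to its own namespace plus the control plane namespace:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: istio-system    # the root namespace, so this acts as the mesh-wide default
spec:
  egress:
  - hosts:
    - "./*"                  # only services in the proxy's own namespace
    - "istio-system/*"       # plus the control plane namespace
```

Narrowing the set of hosts each proxy needs to know about reduces both the size of the generated configuration and the work Istiod performs on every change.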
## Data plane performance
Data plane performance depends on many factors, for example:
- Number of client connections
- Target request rate
- Request and response sizes
- Number of proxy worker threads
- Protocol
- CPU cores
- Various proxy features. In particular, telemetry filters (logging, tracing, and metrics) are known to have a moderate impact.
The latency, throughput, and the proxies' CPU and memory consumption are measured as a function of these factors.
### Sidecar and ztunnel resource usage
Since the sidecar proxy performs additional work on the data path, it consumes CPU
and memory. In Istio 1.24, with 1000 http requests per second containing 1 KB of payload each:
- a single sidecar proxy with 2 worker threads consumes about 0.20 vCPU and 60 MB of memory.
- a single waypoint proxy with 2 worker threads consumes about 0.25 vCPU and 60 MB of memory.
- a single ztunnel proxy consumes about 0.06 vCPU and 12 MB of memory.
The memory consumption of the proxy depends on the total configuration state the proxy holds.
A large number of listeners, clusters, and routes can increase memory usage.
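The worker thread count used in these measurements can be tuned per workload. A minimal sketch, assuming a hypothetical pod name and image, using the `proxy.istio.io/config` annotation to override the proxy's `concurrency` setting:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysvc-v1                  # hypothetical workload
  annotations:
    # ProxyConfig overrides for this pod's sidecar; concurrency sets the
    # number of Envoy worker threads.
    proxy.istio.io/config: |
      concurrency: 2
spec:
  containers:
  - name: app
    image: example/app:latest     # placeholder image
```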
### Latency
Since Istio adds a sidecar proxy or ztunnel proxy on the data path, latency is an important
consideration.
Every feature Istio adds also adds to the path length inside the proxy and potentially affects latency.
The Envoy proxy collects raw telemetry data after a response is sent to the
client.
The time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request.
However, since the worker is busy handling the request, the worker won't start handling the next request immediately.
This process adds to the queue wait time of the next request and affects average and tail latencies.
The actual tail latency depends on the traffic pattern.
### Latency for Istio 1.24
In sidecar mode, a request passes through the client sidecar proxy and then the server sidecar proxy before reaching the server, with the response taking the reverse path.
In ambient mode, a request will pass through the client node ztunnel and then the server node ztunnel before reaching the server.
With waypoints configured, a request will go through a waypoint proxy between the ztunnels.
The following charts show the P90 and P99 latency of http/1.1 requests traveling through various dataplane modes.
To run the tests, we used a bare-metal cluster of 5 [M3 Large](https://deploy.equinix.com/product/servers/m3-large/) machines and [Flannel](https://github.com/flannel-io/flannel) as the primary CNI.
We obtained these results using the [Istio benchmarks](https://github.com/istio/tools/tree//perf/benchmark) for the `http/1.1` protocol with a 1 KB payload at 500, 750, 1000, 1250, and 1500 requests per second using 4 client connections, 2 proxy workers and mutual TLS enabled.
Note: This testing was performed on the [CNCF Community Infrastructure Lab](https://github.com/cncf/cluster).
Different hardware will give different values.
<img width="90%" style="display: block; margin: auto;"
src="istio-1.24.0-fortio-90.png"
alt="P90 latency vs client connections"
caption="P90 latency vs client connections"
/>
<br><br>
<img width="90%" style="display: block; margin: auto;"
src="istio-1.24.0-fortio-99.png"
alt="P99 latency vs client connections"
caption="P99 latency vs client connections"
/>
<br>
- `no mesh`: Client pod directly calls the server pod, no pods in Istio service mesh.
- `ambient: L4`: Default ambient mode with the secure L4 overlay.
- `ambient: L4+L7`: Default ambient mode with the secure L4 overlay and waypoints enabled for the namespace.
- `sidecar`: Client and server sidecars.
### Benchmarking tools
Istio uses the following tools for benchmarking:
- [`fortio.org`](https://fortio.org/) - a constant throughput load testing tool.
- [`nighthawk`](https://github.com/envoyproxy/nighthawk) - a load testing tool based on Envoy.
- [`isotope`](https://github.com/istio/tools/tree//isotope) - a synthetic application with configurable topology.
---
title: Deployment Models
description: Describes the options and considerations when configuring your Istio deployment.
weight: 20
keywords:
- single-cluster
- multiple-clusters
- control-plane
- tenancy
- networks
- identity
- trust
- single-mesh
- multiple-meshes
aliases:
- /docs/concepts/multicluster-deployments
- /docs/concepts/deployment-models
- /docs/ops/prep/deployment-models
owner: istio/wg-environments-maintainers
test: n/a
---
When configuring a production deployment of Istio, you need to answer a number of questions.
Will the mesh be confined to a single cluster or distributed across
multiple clusters? Will all the services be located in a single fully connected network, or will
gateways be required to connect services across multiple networks? Is there a single
control plane, potentially shared across clusters,
or are there multiple control planes deployed to ensure high availability (HA)?
Are all clusters going to be connected into a single multicluster
service mesh or will they be federated into a multi-mesh deployment?
All of these questions, among others, represent independent dimensions of configuration for an Istio deployment.
1. single or multiple cluster
1. single or multiple network
1. single or multiple control plane
1. single or multiple mesh
In a production environment involving multiple clusters, you can use a mix
of deployment models. For example, having more than one control plane is recommended for HA,
but you could achieve this for a 3 cluster deployment by deploying 2 clusters with
a single shared control plane and then adding the third cluster with a second
control plane in a different network. All three clusters could then be configured
to share both control planes so that all the clusters have 2 sources of control
to ensure HA.
Choosing the right deployment model depends on the isolation, performance,
and HA requirements for your use case. This guide describes the various options and
considerations when configuring your Istio deployment.
## Cluster models
The workload instances of your application run in one or more
clusters. For isolation, performance, and
high availability, you can confine clusters to availability zones and regions.
Production systems, depending on their requirements, can run across multiple
clusters spanning a number of zones or regions, leveraging cloud load balancers
to handle things like locality and zonal or regional fail over.
In most cases, clusters represent boundaries for configuration and endpoint
discovery. For example, each Kubernetes cluster has an API Server which manages
the configuration for the cluster as well as serving
service endpoint information as pods are brought up
or down. Since Kubernetes configures this behavior on a per-cluster basis, this
approach helps limit the potential problems caused by incorrect configurations.
In Istio, you can configure a single service mesh to span any number of
clusters.
### Single cluster
In the simplest case, you can confine an Istio mesh to a single
cluster. A cluster usually operates over a
[single network](#single-network), but it varies between infrastructure
providers. A single cluster and single network model includes a control plane,
which results in the simplest Istio deployment.
Single cluster deployments offer simplicity, but lack other features, for
example, fault isolation and fail over. If you need higher availability, you
should use multiple clusters.
### Multiple clusters
You can configure a single mesh to include
multiple clusters. Using a
multicluster deployment within a single mesh affords
the following capabilities beyond that of a single cluster deployment:
- Fault isolation and fail over: `cluster-1` goes down, fail over to `cluster-2`.
- Location-aware routing and fail over: Send requests to the nearest service.
- Various [control plane models](#control-plane-models): Support different
levels of availability.
- Team or project isolation: Each team runs its own set of clusters.
Multicluster deployments give you a greater degree of isolation and
availability but increase complexity. If your systems have high availability
requirements, you likely need clusters across multiple zones and regions. You
can canary configuration changes or new binary releases in a single cluster,
where the configuration changes only affect a small amount of user traffic.
Additionally, if a cluster has a problem, you can temporarily route traffic to
nearby clusters until you address the issue.
You can configure inter-cluster communication based on the
[network](#network-models) and the options supported by your cloud provider. For
example, if two clusters reside on the same underlying network, you can enable
cross-cluster communication by simply configuring firewall rules.
Within a multicluster mesh, all services are shared by default, according to the
concept of namespace sameness.
[Traffic management rules](/docs/ops/configuration/traffic-management/multicluster)
provide fine-grained control over the behavior of multicluster traffic.
### DNS with multiple clusters
When a client application makes a request to some host, it must first perform a
DNS lookup for the hostname to obtain an IP address before it can proceed with
the request.
In Kubernetes, the DNS server residing within the cluster typically handles
this DNS lookup, based on the configured `Service` definitions.
Istio uses the virtual IP returned by the DNS lookup to load balance
across the list of active endpoints for the requested service, taking into account any
Istio configured routing rules.
Istio uses either Kubernetes `Service`/`Endpoint` or Istio `ServiceEntry` to
configure its internal mapping of hostname to workload IP addresses.
This two-tiered naming system becomes more complicated when you have multiple
clusters. Istio is inherently multicluster-aware, but Kubernetes is not
(today). Because of this, the client cluster must have a DNS entry for the
service in order for the DNS lookup to succeed, and a request to be
successfully sent. This is true even if there are no instances of that
service's pods running in the client cluster.
To ensure that DNS lookup succeeds, you must deploy a Kubernetes `Service` to
each cluster that consumes that service. This ensures that regardless of
where the request originates, it will pass DNS lookup and be handed to Istio
for proper routing.
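For example, a minimal stub `Service` like the following (the name, namespace, and port are hypothetical) can be applied to a client cluster that runs no instances of the workload, purely so that the DNS lookup succeeds and the request is handed to the mesh:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld        # hypothetical service name
  namespace: sample
spec:
  # No selector is needed in clusters that host no instances of the workload;
  # the Service only has to exist so that the hostname resolves to a virtual IP.
  ports:
  - name: http
    port: 5000
```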
This can also be achieved with Istio `ServiceEntry`, rather than Kubernetes
`Service`. However, a `ServiceEntry` does not configure the Kubernetes DNS server.
This means that DNS will need to be configured either manually or
with automated tooling such as the
[Address auto allocation](/docs/ops/configuration/traffic-management/dns-proxy/#address-auto-allocation)
feature of [Istio DNS Proxying](/docs/ops/configuration/traffic-management/dns-proxy/).
There are a few efforts in progress that will help simplify the DNS story:
- [Admiral](https://github.com/istio-ecosystem/admiral) is an Istio community
project that provides a number of multicluster capabilities. If you need to support multi-network
topologies, managing this configuration across multiple clusters at scale is challenging.
Admiral takes an opinionated view on this configuration and provides automatic provisioning and
synchronization across clusters.
- [Kubernetes Multi-Cluster Services](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api)
is a Kubernetes Enhancement Proposal (KEP) that defines an API for exporting
services to multiple clusters. This effectively pushes the responsibility of
service visibility and DNS resolution for the entire `clusterset` onto
Kubernetes. There is also work in progress to build layers of `MCS` support
into Istio, which would allow Istio to work with any cloud vendor `MCS`
controller or even act as the `MCS` controller for the entire mesh.
## Network models
Istio uses a simplified definition of network to
refer to workload instances that have direct
reachability. For example, by default all workload instances in a single
cluster are on the same network.
Many production systems require multiple networks or subnets for isolation
and high availability. Istio supports spanning a service mesh over a variety of
network topologies. This approach allows you to select the network model that
fits your existing network topology.
### Single network
In the simplest case, a service mesh operates over a single fully connected
network. In a single network model, all
workload instances
can reach each other directly without an Istio gateway.
A single network allows Istio to configure service consumers in a uniform
way across the mesh with the ability to directly address workload instances.
Note, however, that for a single network across multiple clusters,
services and endpoints cannot have overlapping IP addresses.
### Multiple networks
You can span a single service mesh across multiple networks; such a
configuration is known as **multi-network**.
Multiple networks afford the following capabilities beyond that of single networks:
- Overlapping IP or VIP ranges for **service endpoints**
- Crossing of administrative boundaries
- Fault tolerance
- Scaling of network addresses
- Compliance with standards that require network segmentation
In this model, the workload instances in different networks can only reach each
other through one or more [Istio gateways](/docs/concepts/traffic-management/#gateways).
Istio uses **partitioned service discovery** to provide consumers a different
view of service endpoints. The view depends on the
network of the consumers.
This solution requires exposing all services (or a subset) through the gateway.
Cloud vendors may provide options that will not require exposing services on
the public internet. Such an option, if it exists and meets your requirements,
will likely be the best choice.
In order to ensure secure communications in a multi-network scenario, Istio
only supports cross-network communication to workloads with an Istio proxy.
This is due to the fact that Istio exposes services at the Ingress Gateway with TLS
pass-through, which enables mTLS directly to the workload. A workload without
an Istio proxy, however, will likely not be able to participate in mutual
authentication with other workloads. For this reason, Istio filters
out-of-network endpoints for proxyless services.
## Control plane models
An Istio mesh uses the control plane to configure all
communication between workload instances within the mesh. Workload instances
connect to a control plane instance to get their configuration.
In the simplest case, you can run your mesh with a control plane on a single
cluster.
A cluster like this one, with its own local control plane, is referred to as
a primary cluster.
Multicluster deployments can also share control plane instances. In this case,
the control plane instances can reside in one or more primary clusters.
Clusters without their own control plane are referred to as
remote clusters.
To support remote clusters in a multicluster mesh, the control plane in
a primary cluster must be accessible via a stable IP (e.g., a cluster IP).
For clusters spanning networks,
this can be achieved by exposing the control plane through an Istio gateway.
Cloud vendors may provide options, such as internal load balancers, for
providing this capability without exposing the control plane on the
public internet. Such an option, if it exists and meets your requirements,
will likely be the best choice.
In multicluster deployments with more than one primary cluster, each primary
cluster receives its configuration (`Service`, `ServiceEntry`,
`DestinationRule`, and so on) from the Kubernetes API Server residing in the same
cluster. Each primary cluster, therefore, has an independent source of
configuration.
This duplication of configuration across primary clusters does require
additional steps when rolling out changes. Large production
systems may automate this process with tooling, such as CI/CD systems, in
order to manage configuration rollout.
Instead of running control planes in primary clusters inside the mesh, a
service mesh composed entirely of remote clusters can be controlled by an
external control plane. This provides isolated
management and complete separation of the control plane deployment from the
data plane services that comprise the mesh.
A cloud vendor's managed control plane is a
typical example of an external control plane.
For high availability, you should deploy multiple control planes across
clusters, zones, or regions.
This model affords the following benefits:
- Improved availability: If a control plane becomes unavailable, the scope of
the outage is limited to only workloads in clusters managed by that control plane.
- Configuration isolation: You can make configuration changes in one cluster,
zone, or region without impacting others.
- Controlled rollout: You have more fine-grained control over configuration
rollout (e.g., one cluster at a time). You can also canary configuration changes in a sub-section of the mesh
controlled by a given primary cluster.
- Selective service visibility: You can restrict service visibility to part
of the mesh, helping to establish service-level isolation. For example, an
administrator may choose to deploy the `HelloWorld` service to Cluster A,
but not Cluster B. Any attempt to call `HelloWorld` from Cluster B will
fail the DNS lookup.
The following list ranks control plane deployment examples by availability:
- One cluster per region (**lowest availability**)
- Multiple clusters per region
- One cluster per zone
- Multiple clusters per zone
- Each cluster (**highest availability**)
### Endpoint discovery with multiple control planes
An Istio control plane manages traffic within the mesh by providing each proxy
with the list of service endpoints. In order to make this work in a
multicluster scenario, each control plane must observe endpoints from the API
Server in every cluster.
To enable endpoint discovery for a cluster, an administrator generates a
`remote secret` and deploys it to each primary cluster in the mesh. The
`remote secret` contains credentials, granting access to the API server in the
cluster.
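In practice, the remote secret is an ordinary Kubernetes `Secret` in the control plane namespace, labeled so that Istiod watches the corresponding cluster and containing a kubeconfig for its API server. The following is an illustrative sketch only; the cluster name is hypothetical and the exact shape is normally generated by tooling rather than written by hand:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster-east   # hypothetical cluster name
  namespace: istio-system
  labels:
    istio/multiCluster: "true"             # tells Istiod to watch this cluster
stringData:
  cluster-east: |
    # kubeconfig granting read access to the remote cluster's API server
    apiVersion: v1
    kind: Config
    clusters: []      # contents omitted for brevity
    users: []
    contexts: []
```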
The control planes will then connect and discover the service endpoints for
the cluster, enabling cross-cluster load balancing for these services.
By default, Istio will load balance requests evenly between endpoints in
each cluster. In large systems that span geographic regions, it may be
desirable to use [locality load balancing](/docs/tasks/traffic-management/locality-load-balancing)
to prefer that traffic stay in the same zone or region.
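As a hedged sketch (the host and thresholds below are assumptions), locality-aware failover is configured with a `DestinationRule` and requires outlier detection so that Istio can detect when a locality's endpoints become unhealthy:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld-locality
spec:
  host: helloworld.sample.svc.cluster.local   # hypothetical service
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
    outlierDetection:            # required for locality failover to take effect
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```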
In some advanced scenarios, load balancing across clusters may not be desired.
For example, in a blue/green deployment, you may deploy different versions of
the system to different clusters. In this case, each cluster is effectively
operating as an independent mesh. This behavior can be achieved in a couple of
ways:
- Do not exchange remote secrets between the clusters. This offers the
strongest isolation between the clusters.
- Use `VirtualService` and `DestinationRule` to disallow routing between two
versions of the services.
In either case, cross-cluster load balancing is prevented. External traffic
can be routed to one cluster or the other using an external load balancer.
## Identity and trust models
When a workload instance is created within a service mesh, Istio assigns the
workload an identity.
The Certificate Authority (CA) creates and signs the certificates used to verify
the identities used within the mesh. You can verify the identity of the message sender
with the public key of the CA that created and signed the certificate
for that identity. A **trust bundle** is the set of all CA public keys used by
an Istio mesh. With a mesh's trust bundle, anyone can verify the sender of any
message coming from that mesh.
### Trust within a mesh
Within a single Istio mesh, Istio ensures each workload instance has an
appropriate certificate representing its own identity, and the trust bundle
necessary to recognize all identities within the mesh and any federated meshes.
The CA creates and signs the certificates for those identities. This model
allows workload instances in the mesh to authenticate each other when
communicating.
### Trust between meshes
To enable communication between two meshes with different CAs, you must
exchange the trust bundles of the meshes. Istio does not provide any tooling
to exchange trust bundles across meshes. You can exchange the trust bundles
either manually or automatically using a protocol such as [SPIFFE Trust Domain Federation](https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE_Federation.md).
Once you import a trust bundle to a mesh, you can configure local policies for
those identities.
## Mesh models
Istio supports having all of your services in a
mesh, or federating multiple meshes
together, which is also known as multi-mesh.
### Single mesh
The simplest Istio deployment is a single mesh. Within a mesh, service names are
unique. For example, only one service can have the name `mysvc` in the `foo`
namespace. Additionally, workload instances share a common identity since
service account names are unique within a namespace, just like service names.
A single mesh can span [one or more clusters](#cluster-models) and
[one or more networks](#network-models). Within a mesh,
[namespaces](#namespace-tenancy) are used for [tenancy](#tenancy-models).
### Multiple meshes
Multiple mesh deployments result from mesh federation.
Multiple meshes afford the following capabilities beyond that of a single mesh:
- Organizational boundaries: lines of business
- Service name or namespace reuse: multiple distinct uses of the `default`
namespace
- Stronger isolation: isolating test workloads from production workloads
You can enable inter-mesh communication with mesh federation. When federating, each mesh can expose a set of services and
identities, which all participating meshes can recognize.
To avoid service naming collisions, you can give each mesh a globally unique
**mesh ID**, to ensure that the fully qualified domain
name (FQDN) for each service is distinct.
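The mesh ID (together with the cluster name, network, and trust domain discussed below) is typically set at installation time. A minimal sketch using the `IstioOperator` API, with all values hypothetical:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    trustDomain: mesh1.example.com   # hypothetical trust domain
  values:
    global:
      meshID: mesh1                  # globally unique mesh ID
      multiCluster:
        clusterName: cluster-east    # hypothetical cluster name
      network: network1
```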
When federating two meshes that do not share the same
trust domain, you must federate
identity and **trust bundles** between them. See the
section on [Trust between meshes](#trust-between-meshes) for more details.
## Tenancy models
In Istio, a **tenant** is a group of users that share
common access and privileges for a set of deployed workloads.
Tenants can be used to provide a level of isolation between different teams.
You can configure tenancy models to satisfy the following organizational
requirements for isolation:
- Security
- Policy
- Capacity
- Cost
- Performance
Istio supports three types of tenancy models:
- [Namespace tenancy](#namespace-tenancy)
- [Cluster tenancy](#cluster-tenancy)
- [Mesh tenancy](#mesh-tenancy)
### Namespace tenancy
A cluster can be shared across multiple teams, each using a different namespace.
You can grant a team permission to deploy its workloads only to a given namespace
or set of namespaces.
By default, services from multiple namespaces can communicate with each other,
but you can increase isolation by selectively choosing which services to expose to other
namespaces. You can configure authorization policies for exposed services to restrict
access to only the appropriate callers.
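For example, a hedged sketch of an `AuthorizationPolicy` (the namespaces and workload label are hypothetical) that exposes one team's service only to callers from a specific peer namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-team-2
  namespace: team-1             # hypothetical tenant namespace
spec:
  selector:
    matchLabels:
      app: shared-api           # hypothetical exposed workload
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["team-2"]  # only callers from this namespace are allowed
```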
Namespace tenancy can extend beyond a single cluster.
When using [multiple clusters](#multiple-clusters), the namespaces in each
cluster sharing the same name are considered the same namespace by default.
For example, `Service B` in the `Team-1` namespace of cluster `West` and `Service B` in the
`Team-1` namespace of cluster `East` refer to the same service, and Istio merges their
endpoints for service discovery and load balancing.
### Cluster tenancy
Istio supports using clusters as a unit of tenancy. In this case, you can give
each team a dedicated cluster or set of clusters to deploy their
workloads. Permissions for a cluster are usually limited to the members of the
team that owns it. You can set various roles for finer grained control, for
example:
- Cluster administrator
- Developer
To use cluster tenancy with Istio, you configure each team's cluster with its
own control plane, allowing each team to manage its own configuration.
Alternatively, you can use Istio to implement a group of clusters as a single tenant
using remote clusters or multiple
synchronized primary clusters.
Refer to [control plane models](#control-plane-models) for details.
### Mesh Tenancy
In a multi-mesh deployment with mesh federation, each mesh
can be used as the unit of isolation.
Since a different team or organization operates each mesh, service naming
is rarely distinct. For example, `Service C` in the `foo` namespace of
cluster `Team-1` and `Service C` in the `foo` namespace of cluster
`Team-2` do not refer to the same service. The most common example is the
scenario in Kubernetes where many teams deploy their workloads to the `default`
namespace.
When each team has its own mesh, cross-mesh communication follows the
concepts described in the [multiple meshes](#multiple-meshes) model.
---
title: Security Model
description: Describes Istio's security model.
weight: 10
owner: istio/wg-security-maintainers
test: n/a
---
This document aims to describe the security posture of Istio's various components, and how possible attacks can impact the system.
## Components
Istio comes with a variety of optional components that will be covered here.
For a high level overview, see [Istio Architecture](/docs/ops/deployment/architecture/).
Note that Istio deployments are highly flexible; below, we will primarily assume the worst case scenarios.
### Istiod
Istiod serves as the core control plane component of Istio, often serving the role of the [XDS serving component](/docs/concepts/traffic-management/) as well
as the mesh [mTLS Certificate Authority](/docs/concepts/security/).
Istiod is considered a highly privileged component, similar to that of the Kubernetes API server itself.
* It has high Kubernetes RBAC privileges, typically including `Secret` read access and webhook write access.
* When acting as the CA, it can provision arbitrary certificates.
* When acting as the XDS control plane, it can program proxies to perform arbitrary behavior.
As such, the security of the cluster is tightly coupled to the security of Istiod.
Following [Kubernetes security best practices](https://kubernetes.io/docs/concepts/security/) around Istiod access is paramount.
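If you want to review the exact privileges granted in your own installation (resource names vary by install profile and revision, so the commands below are only a sketch), you can inspect the cluster roles bound to Istiod's service account:

```console
$ kubectl get clusterrolebinding -o wide | grep istiod
$ kubectl describe clusterrole <name-from-previous-output>
```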
### Istio CNI plugin
Istio can optionally be deployed with the [Istio CNI Plugin `DaemonSet`](/docs/setup/additional-setup/cni/).
This `DaemonSet` is responsible for setting up networking rules in Istio to ensure traffic is transparently redirected as needed.
This is an alternative to the `istio-init` container discussed [below](#sidecar-proxies).
Because the CNI `DaemonSet` modifies networking rules on the node, it requires an elevated `securityContext`.
However, unlike [Istiod](#istiod), this is a **node-local** privilege.
The implications of this are discussed [below](#node-compromise).
Because this consolidates the elevated privileges required to set up networking into a single pod, rather than *every* pod,
this option is generally recommended.
### Sidecar Proxies
Istio may [optionally](/docs/overview/dataplane-modes/) deploy a sidecar proxy next to an application.
The sidecar proxy needs the network to be programmed to direct all traffic through the proxy.
This can be done with the [Istio CNI plugin](#istio-cni-plugin) or by deploying an `initContainer` (`istio-init`) on the pod (this is done automatically if the CNI plugin is not deployed).
The `istio-init` container requires `NET_ADMIN` and `NET_RAW` capabilities.
However, these capabilities are only present during the initialization - the primary sidecar container is completely unprivileged.
Additionally, the sidecar proxy does not require any associated Kubernetes RBAC privileges at all.
Each sidecar proxy is authorized to request a certificate for the associated Pod Service Account.
### Gateways and Waypoints
Gateways and Waypoints act as standalone proxy deployments.
Unlike [sidecars](#sidecar-proxies), they do not require any networking modifications, and thus don't require any privilege.
These components run with their own service accounts, distinct from application identities.
### Ztunnel
Ztunnel acts as a node-level proxy.
This task requires the `NET_ADMIN`, `SYS_ADMIN`, and `NET_RAW` capabilities.
Like the [Istio CNI Plugin](#istio-cni-plugin), these are **node-local** privileges only.
The Ztunnel does not have any associated Kubernetes RBAC privileges.
Ztunnel is authorized to request certificates for any Service Accounts of pods running on the same node.
Similar to [kubelet](https://kubernetes.io/docs/reference/access-authn-authz/node/), this explicitly does not allow requesting arbitrary
certificates.
This, again, ensures these privileges are **node-local** only.
## Traffic Capture Properties
When a pod is enrolled in the mesh, all incoming TCP traffic will be redirected to the proxy.
This includes both mTLS/HBONE traffic and plaintext traffic.
Any applicable [policies](/docs/tasks/security/authorization/) for the workload will be enforced before forwarding the traffic to the workload.
However, Istio does not currently guarantee that _outgoing_ traffic is redirected to the proxy.
See [traffic capture limitations](/docs/ops/best-practices/security/#understand-traffic-capture-limitations).
As such, care must be taken to follow the [securing egress traffic](/docs/ops/best-practices/security/#securing-egress-traffic) steps if outbound policies are required.
## Mutual TLS Properties
[Mutual TLS](/docs/concepts/security/#mutual-tls-authentication) provides the basis for much of Istio's security posture.
The sections below explain the properties that mutual TLS provides for Istio's security posture.
### Certificate Authority
Istio comes out of the box with its own Certificate Authority.
By default, the CA allows authenticating clients based on any of the options below:
* A Kubernetes JWT token, with an audience of `istio-ca`, verified with a Kubernetes `TokenReview`. This is the default method in Kubernetes Pods.
* An existing mutual TLS certificate.
* Custom JWT tokens, verified using OIDC (requires configuration).
The CA will only issue certificates for identities that the requesting client is authenticated for.
Istio can also integrate with a variety of third party CAs; please refer to any of their security documentation for more information on how they behave.
### Client mTLS
In sidecar mode, the client sidecar will [automatically use TLS](/docs/ops/configuration/traffic-management/tls-configuration/#auto-mtls) when connecting to a service
that is detected to support mTLS. This can also be [explicitly configured](/docs/ops/configuration/traffic-management/tls-configuration/#sidecars).
Note that this automatic detection relies on Istio associating the traffic to a Service.
[Unsupported traffic types](/docs/ops/configuration/traffic-management/traffic-routing/#unmatched-traffic) or [configuration scoping](/docs/ops/configuration/mesh/configuration-scoping/) can prevent this.
When [connecting to a backend](/docs/concepts/security/#secure-naming), the set of allowed identities is computed, at the Service level, based on the union of all backends' identities.
In ambient mode, Istio will automatically use mTLS when connecting to any backend that supports mTLS, and verify the identity of the destination matches the identity the workload is expected to be running as.
These properties differ from sidecar mode in that they are properties of individual workloads, rather than of the service.
This enables more fine-grained authentication checks, as well as supporting a wider variety of workloads.
### Server mTLS
By default, Istio will accept mTLS and non-mTLS traffic (often called "permissive mode").
Users can opt in to strict enforcement by writing `PeerAuthentication` or `AuthorizationPolicy` rules requiring mTLS.
When mTLS connections are established, the peer certificate is verified.
Additionally, the peer identity is verified to be within the same trust domain.
To verify only specific identities are allowed, an `AuthorizationPolicy` can be used.
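For illustration, the following is a minimal sketch of strict enforcement for a namespace together with an `AuthorizationPolicy` that only allows a single caller identity; the namespace `foo`, the `app: backend` label, and the `frontend` service account are hypothetical:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT    # reject plaintext traffic to workloads in this namespace
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-frontend
  namespace: foo
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/frontend"]   # only this mTLS identity may call the backend
```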
## Compromise types explored
Based on the above overview, we will consider the impact on the cluster if various parts of the system are compromised.
In the real world, there are a variety of different variables around any security attack:
* How easy it is to execute
* What prior privileges are required
* How often it can be exploited
* What the impact is (remote code execution, denial of service, etc.).
In this document, we will primarily consider the worst case scenario: a compromised component means an attacker has complete remote code execution capabilities.
### Workload compromise
In this scenario, an application workload (pod) is compromised.
A pod [*may* have access](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#opt-out-of-api-credential-automounting) to its service account token.
If so, a workload compromise can move laterally from a single pod to compromising the entire service account.
In the sidecar model, the proxy is co-located with the pod, and runs within the same trust boundary.
A compromised application can tamper with the proxy through the admin API or other surfaces, including exfiltration of private key material, allowing another agent to impersonate the workload.
It should be assumed that a compromised workload also includes a compromise of the sidecar proxy.
Given this, a compromised workload may:
* Send arbitrary traffic, with or without mutual TLS.
  This traffic may bypass any proxy configuration, or even the proxy entirely.
  Note that Istio does not offer egress-based authorization policies, so there is no egress authorization policy bypass occurring.
* Accept traffic that was already destined for the application. It may bypass policies that were configured in the sidecar proxy.
The key takeaway here is that while the compromised workload may behave maliciously, this does not give an attacker the ability to bypass policies in _other_ workloads.
In ambient mode, the node proxy is not co-located with the pod, and runs in a separate trust boundary as part of an independent pod.
A compromised application may send arbitrary traffic.
However, it does not have control over the node proxy, which will choose how to handle incoming and outbound traffic.
Additionally, as the pod itself doesn't have access to a service account token to request a mutual TLS certificate, lateral movement possibilities are reduced.
Istio offers a variety of features that can limit the impact of such a compromise:
* [Observability](/docs/tasks/observability/) features can be used to identify the attack.
* [Policies](/docs/tasks/security/authorization/) can be used to restrict what type of traffic a workload can send or receive.
### Proxy compromise - Sidecars
In this scenario, a sidecar proxy is compromised.
Because the sidecar and application reside within the same trust boundary, this is functionally equivalent to the [Workload compromise](#workload-compromise).
### Proxy compromise - Waypoint
In this scenario, a [waypoint proxy](#gateways-and-waypoints) is compromised.
While waypoints do not have any privileges for a hacker to exploit, they do serve (potentially) many different services and workloads.
A compromised waypoint will receive all traffic for these, which it can view, modify, or drop.
Istio offers the flexibility of [configuring the granularity of a waypoint deployment](/docs/ambient/usage/waypoint/#useawaypoint).
Users may consider deploying more isolated waypoints if they require stronger isolation.
Because waypoints run with a distinct identity from the applications they serve, a compromised waypoint does not imply the user's applications can be impersonated.
### Proxy compromise - Ztunnel
In this scenario, a [ztunnel](#ztunnel) proxy is compromised.
A compromised ztunnel gives the attacker control of the networking of the node.
Ztunnel has access to private key material for each application running on its node.
A compromised ztunnel could have these exfiltrated and used elsewhere.
However, lateral movement to identities beyond co-located workloads is not possible; each ztunnel is only authorized to access certificates for workloads running on its node, scoping the blast radius of a compromised ztunnel.
### Node compromise
In this scenario, the Kubernetes Node is compromised.
Both [Kubernetes](https://kubernetes.io/docs/reference/access-authn-authz/node/) and Istio are designed to limit the blast radius of a single node compromise, such that
the compromise of a single node does not lead to a [cluster-wide compromise](#cluster-api-server-compromise).
However, the attacker does have complete control over any workloads running on that node.
For instance, it can compromise any co-located [waypoints](#proxy-compromise---waypoint), the local [ztunnel](#proxy-compromise---ztunnel), any [sidecars](#proxy-compromise---sidecars), any co-located [Istiod instances](#istiod-compromise), etc.
### Cluster (API Server) compromise
A compromise of the Kubernetes API Server effectively means the entire cluster and mesh are compromised.
Unlike most other attack vectors, there isn't much Istio can do to control the blast radius of such an attack.
A compromised API Server gives a hacker complete control over the cluster, including actions such as running `kubectl exec` on arbitrary pods,
removing any Istio `AuthorizationPolicies`, or even uninstalling Istio entirely.
### Istiod compromise
A compromise of Istiod generally leads to the same result as an [API Server compromise](#cluster-api-server-compromise).
Istiod is a highly privileged component that should be strongly protected.
Following the [security best practices](/docs/ops/best-practices/security) is crucial to maintaining a secure cluster.
---
title: Traffic Management Problems
description: Techniques to address common Istio traffic management and network problems.
force_inline_toc: true
weight: 10
aliases:
- /help/ops/traffic-management/troubleshooting
- /help/ops/troubleshooting/network-issues
- /docs/ops/troubleshooting/network-issues
owner: istio/wg-networking-maintainers
test: n/a
---
## Requests are rejected by Envoy
Requests may be rejected for various reasons. The best way to understand why requests are being rejected is
by inspecting Envoy's access logs. By default, access logs are output to the standard output of the container.
Run the following command to see the log:
$ kubectl logs PODNAME -c istio-proxy -n NAMESPACE
In the default access log format, Envoy response flags are located after the response code.
If you are using a custom log format, make sure to include `%RESPONSE_FLAGS%`.
Refer to the [Envoy response flags](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#config-access-log-format-response-flags)
for details of response flags.
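For instance, in the default text format a rejected request might look like the following abridged, hypothetical line; the `UF` immediately after the `503` response code is the response flag:

```
[2024-01-01T00:00:00.000Z] "GET /status/200 HTTP/1.1" 503 UF upstream_reset_before_response_started{connection_failure} - "-" 0 91 2 - ...
```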
Common response flags are:
- `NR`: No route configured, check your `DestinationRule` or `VirtualService`.
- `UO`: Upstream overflow with circuit breaking, check your circuit breaker configuration in `DestinationRule`.
- `UF`: Failed to connect to upstream, if you're using Istio authentication, check for a
[mutual TLS configuration conflict](#503-errors-after-setting-destination-rule).
## Route rules don't seem to affect traffic flow
With the current Envoy sidecar implementation, up to 100 requests may be required for weighted
version distribution to be observed.
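To get a rough picture of how traffic is actually being split, you can send a batch of requests and tally the responses. The sketch below assumes a hypothetical `myapp` service that reports its version at `/version`, called from a test pod created with the [curl](/samples/curl) sample:

```console
$ for i in $(seq 1 100); do kubectl exec deploy/curl -c curl -- curl -s http://myapp:8000/version; done | sort | uniq -c
```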
If route rules are working perfectly for the [Bookinfo](/docs/examples/bookinfo/) sample,
but similar version routing rules have no effect on your own application, it may be that
your Kubernetes services need to be changed slightly.
Kubernetes services must adhere to certain restrictions in order to take advantage of
Istio's L7 routing features.
Refer to the [Requirements for Pods and Services](/docs/ops/deployment/application-requirements/)
for details.
Another potential issue is that the route rules may simply be slow to take effect.
The Istio implementation on Kubernetes utilizes an eventually consistent
algorithm to ensure all Envoy sidecars have the correct configuration,
including all route rules. A configuration change will take some time
to propagate to all the sidecars. With large deployments, the
propagation will take longer and there may be a lag time on the
order of seconds.
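You can also check whether configuration has finished propagating to a given workload with `istioctl`; proxies whose listener and route configuration shows as `SYNCED` have received the latest push from Istiod:

```console
$ istioctl proxy-status
```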
## 503 errors after setting destination rule
You should only see this error if you disabled [automatic mutual TLS](/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls) during install.
If requests to a service immediately start generating HTTP 503 errors after you applied a `DestinationRule`
and the errors continue until you remove or revert the `DestinationRule`, then the `DestinationRule` is probably
causing a TLS conflict for the service.
For example, if you configure mutual TLS in the cluster globally, the `DestinationRule` must include the following `trafficPolicy`:
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
Otherwise, the mode defaults to `DISABLE` causing client proxy sidecars to make plain HTTP requests
instead of TLS encrypted requests. Thus, the requests conflict with the server proxy because the server proxy expects
encrypted requests.
Whenever you apply a `DestinationRule`, ensure the `trafficPolicy` TLS mode matches the global server configuration.
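For example, a complete `DestinationRule` for a hypothetical `reviews` service with a matching TLS mode might look like this:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # matches a mesh-wide mutual TLS requirement
```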
## Route rules have no effect on ingress gateway requests
Let's assume you are using an ingress `Gateway` and corresponding `VirtualService` to access an internal service.
For example, your `VirtualService` looks something like this:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- "myapp.com" # or maybe "*" if you are testing without DNS using the ingress-gateway IP (e.g., http://1.2.3.4/hello)
gateways:
- myapp-gateway
http:
- match:
- uri:
prefix: /hello
route:
- destination:
host: helloworld.default.svc.cluster.local
- match:
...
You also have a `VirtualService` which routes traffic for the helloworld service to a particular subset:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: helloworld
spec:
hosts:
- helloworld.default.svc.cluster.local
http:
- route:
- destination:
host: helloworld.default.svc.cluster.local
subset: v1
In this situation you will notice that requests to the helloworld service via the ingress gateway will
not be directed to subset v1 but instead will continue to use default round-robin routing.
The ingress requests are using the gateway host (e.g., `myapp.com`)
which will activate the rules in the myapp `VirtualService` that routes to any endpoint of the helloworld service.
Only internal requests with the host `helloworld.default.svc.cluster.local` will use the
helloworld `VirtualService` which directs traffic exclusively to subset v1.
To control the traffic from the gateway, you need to also include the subset rule in the myapp `VirtualService`:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- "myapp.com" # or maybe "*" if you are testing without DNS using the ingress-gateway IP (e.g., http://1.2.3.4/hello)
gateways:
- myapp-gateway
http:
- match:
- uri:
prefix: /hello
route:
- destination:
host: helloworld.default.svc.cluster.local
subset: v1
- match:
...
Alternatively, you can combine both `VirtualServices` into one unit if possible:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- myapp.com # cannot use "*" here since this is being combined with the mesh services
- helloworld.default.svc.cluster.local
gateways:
- mesh # applies internally as well as externally
- myapp-gateway
http:
- match:
- uri:
prefix: /hello
gateways:
- myapp-gateway #restricts this rule to apply only to ingress gateway
route:
- destination:
host: helloworld.default.svc.cluster.local
subset: v1
- match:
- gateways:
- mesh # applies to all services inside the mesh
route:
- destination:
host: helloworld.default.svc.cluster.local
subset: v1
## Envoy is crashing under load
Check your `ulimit -a`. Many systems have a 1024 open file descriptor limit by default which will cause Envoy to assert and crash with:
[2017-05-17 03:00:52.735][14236][critical][assert] assert failure: fd_ != -1: external/envoy/source/common/network/connection_impl.cc:58
Make sure to raise your ulimit. Example: `ulimit -n 16384`
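Note that for sidecars the limit that matters is the one in effect inside the `istio-proxy` container; you can check it with something like:

```console
$ kubectl exec PODNAME -c istio-proxy -- sh -c 'ulimit -n'
```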
## Envoy won't connect to my HTTP/1.0 service
Envoy requires `HTTP/1.1` or `HTTP/2` traffic for upstream services. For example, when using [NGINX](https://www.nginx.com/) for serving traffic behind Envoy, you
will need to set the [proxy_http_version](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) directive in your NGINX configuration to be "1.1", since the NGINX default is 1.0.
Example configuration:
upstream http_backend {
server 127.0.0.1:8080;
keepalive 16;
}
server {
...
location /http/ {
proxy_pass http://http_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
...
}
}
## 503 error while accessing headless services
Assume Istio is installed with the following configuration:
- `mTLS mode` set to `STRICT` within the mesh
- `meshConfig.outboundTrafficPolicy.mode` set to `ALLOW_ANY`
Consider `nginx` is deployed as a `StatefulSet` in the default namespace and a corresponding `Headless Service` is defined as shown below:
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: http-web # Explicitly defining an http port
clusterIP: None # Creates a Headless Service
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.k8s.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
The port name `http-web` in the Service definition explicitly specifies the http protocol for that port.
Let us assume we have a [curl](/samples/curl) pod `Deployment` as well in the default namespace.
When `nginx` is accessed from this `curl` pod using its Pod IP (this is one of the common ways to access a headless service), the request goes via the `PassthroughCluster` to the server-side, but the sidecar proxy on the server-side fails to find the route entry to `nginx` and fails with `HTTP 503 UC`.
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
503
`10.1.1.171` is the Pod IP of one of the replicas of `nginx` and the service is accessed on `containerPort` 80.
Here are some of the ways to avoid this 503 error:
1. Specify the correct Host header:
The Host header in the curl request above will be the Pod IP by default. Specifying the Host header as `nginx.default` in our request to `nginx` successfully returns `HTTP 200 OK`.
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
200
1. Set port name to `tcp` or `tcp-web` or `tcp-<custom_name>`:
    Here the protocol is explicitly specified as `tcp`. In this case, only the `TCP Proxy` network filter on the sidecar proxy is used both on the client-side and server-side. The HTTP Connection Manager is not used at all, so no headers are expected in the request.
A request to `nginx` with or without explicitly setting the Host header successfully returns `HTTP 200 OK`.
This is useful in certain scenarios where a client may not be able to include header information in the request.
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
200
$ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
200
1. Use domain name instead of Pod IP:
A specific instance of a headless service can also be accessed using just the domain name.
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c curl -- curl web-0.nginx.default -s -o /dev/null -w "%{http_code}"
200
Here `web-0` is the pod name of one of the 3 replicas of `nginx`.
Refer to this [traffic routing](/docs/ops/configuration/traffic-management/traffic-routing/) page for some additional information on headless services and traffic routing behavior for different protocols.
## TLS configuration mistakes
Many traffic management problems
are caused by incorrect [TLS configuration](/docs/ops/configuration/traffic-management/tls-configuration/).
The following sections describe some of the most common misconfigurations.
### Sending HTTPS to an HTTP port
If your application sends an HTTPS request to a service declared to be HTTP,
the Envoy sidecar will attempt to parse the request as HTTP while forwarding it,
which will fail because the traffic is unexpectedly encrypted.
Consider, for example, the following `ServiceEntry`, which declares port 443 as plaintext HTTP:
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
name: httpbin
spec:
hosts:
- httpbin.org
ports:
- number: 443
name: http
protocol: HTTP
resolution: DNS
Although the above configuration may be correct if you are intentionally sending plaintext on port 443 (e.g., `curl http://httpbin.org:443`),
generally port 443 is dedicated for HTTPS traffic.
Sending an HTTPS request like `curl https://httpbin.org`, which defaults to port 443, will result in an error like
`curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`.
The access logs may also show an error like `400 DPE`.
To fix this, you should change the port protocol to HTTPS:
spec:
ports:
- number: 443
name: https
protocol: HTTPS
### Gateway to virtual service TLS mismatch {#gateway-mismatch}
There are two common TLS mismatches that can occur when binding a virtual service to a gateway.
1. The gateway terminates TLS while the virtual service configures TLS routing.
1. The gateway does TLS passthrough while the virtual service configures HTTP routing.
#### Gateway with TLS termination
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
name: gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
credentialName: sds-credential
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: httpbin
spec:
hosts:
- "*.example.com"
gateways:
- istio-system/gateway
tls:
- match:
- sniHosts:
- "*.example.com"
route:
- destination:
host: httpbin.org
In this example, the gateway is terminating TLS (the `tls.mode` configuration of the gateway is `SIMPLE`,
not `PASSTHROUGH`) while the virtual service is using TLS-based routing. Evaluating routing rules
occurs after the gateway terminates TLS, so the TLS rule will have no effect because the
request is then HTTP rather than HTTPS.
With this misconfiguration, you will end up getting 404 responses because the requests will be
sent to HTTP routing but there are no HTTP routes configured.
You can confirm this using the `istioctl proxy-config routes` command.
To fix this problem, you should switch the virtual service to specify `http` routing, instead of `tls`:
spec:
...
http:
- match:
- headers:
":authority":
          regex: '.*\.example\.com'
#### Gateway with TLS passthrough
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- "*"
port:
name: https
number: 443
protocol: HTTPS
tls:
mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: virtual-service
spec:
gateways:
- gateway
hosts:
- httpbin.example.com
http:
- route:
- destination:
host: httpbin.org
In this configuration, the virtual service is attempting to match HTTP traffic against TLS traffic passed through the gateway.
This will result in the virtual service configuration having no effect. You can observe that the HTTP route is not applied using
the `istioctl proxy-config listener` and `istioctl proxy-config route` commands.
To fix this, you should switch the virtual service to configure `tls` routing:
spec:
tls:
- match:
- sniHosts: ["httpbin.example.com"]
route:
- destination:
host: httpbin.org
Alternatively, you could terminate TLS, rather than passing it through, by switching the `tls` configuration in the gateway:
spec:
...
tls:
credentialName: sds-credential
mode: SIMPLE
### Double TLS (TLS origination for a TLS request) {#double-tls}
When configuring Istio to perform TLS origination, you need to make sure
that the application sends plaintext requests to the sidecar, which will then originate the TLS.
The following `DestinationRule` originates TLS for requests to the `httpbin.org` service,
but the corresponding `ServiceEntry` defines the protocol as HTTPS on port 443.
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
name: httpbin
spec:
hosts:
- httpbin.org
ports:
- number: 443
name: https
protocol: HTTPS
resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
name: originate-tls
spec:
host: httpbin.org
trafficPolicy:
tls:
mode: SIMPLE
With this configuration, the sidecar expects the application to send TLS traffic on port 443
(e.g., `curl https://httpbin.org`), but it will also perform TLS origination before forwarding requests.
This will cause the requests to be double encrypted.
For example, sending a request like `curl https://httpbin.org` will result in an error:
`(35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`.
You can fix this example by changing the port protocol in the `ServiceEntry` to HTTP:
spec:
hosts:
- httpbin.org
ports:
- number: 443
name: http
protocol: HTTP
Note that with this configuration your application will need to send plaintext requests to port 443,
like `curl http://httpbin.org:443`, because TLS origination does not change the port.
However, starting in Istio 1.8, you can expose HTTP port 80 to the application (e.g., `curl http://httpbin.org`)
and then redirect requests to `targetPort` 443 for the TLS origination:
spec:
hosts:
- httpbin.org
ports:
- number: 80
name: http
protocol: HTTP
targetPort: 443
### 404 errors occur when multiple gateways configured with same TLS certificate
Configuring more than one gateway using the same TLS certificate will cause browsers
that leverage [HTTP/2 connection reuse](https://httpwg.org/specs/rfc7540.html#reuse)
(i.e., most browsers) to produce 404 errors when accessing a second host after a
connection to another host has already been established.
For example, let's say you have 2 hosts that share the same TLS certificate like this:
- Wildcard certificate `*.test.com` installed in `istio-ingressgateway`
- `Gateway` configuration `gw1` with host `service1.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
- `Gateway` configuration `gw2` with host `service2.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw1`
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw2`
Since both gateways are served by the same workload (i.e., selector `istio: ingressgateway`) requests to both services
(`service1.test.com` and `service2.test.com`) will resolve to the same IP. If `service1.test.com` is accessed first, it
will return the wildcard certificate (`*.test.com`) indicating that connections to `service2.test.com` can use the same certificate.
Browsers like Chrome and Firefox will consequently reuse the existing connection for requests to `service2.test.com`.
Since the gateway (`gw1`) has no route for `service2.test.com`, it will then return a 404 (Not Found) response.
You can avoid this problem by configuring a single wildcard `Gateway`, instead of two (`gw1` and `gw2`).
Then, simply bind both `VirtualServices` to it like this:
- `Gateway` configuration `gw` with host `*.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw`
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw`
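A minimal sketch of such a combined `Gateway`, assuming the wildcard certificate is referenced through a (hypothetical) `credentialName` secret rather than a file mount:

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gw
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.test.com"
    tls:
      mode: SIMPLE
      credentialName: test-com-credential   # hypothetical secret holding the *.test.com certificate
```

Both `vs1` and `vs2` can then list `gw` (or `istio-system/gw` if they live in another namespace) in their `gateways` field.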
### Configuring SNI routing when not sending SNI
An HTTPS `Gateway` that specifies the `hosts` field will perform an [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) match on incoming requests.
For example, the following configuration would only allow requests that match `*.example.com` in the SNI:
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*.example.com"
This may cause certain requests to fail.
For example, if you do not have DNS set up and are instead directly setting the host header, such as `curl 1.2.3.4 -H "Host: app.example.com"`, no SNI will be set, causing the request to fail.
Instead, you can set up DNS or use the `--resolve` flag of `curl`. See the [Secure Gateways](/docs/tasks/traffic-management/ingress/secure-ingress/) task for more information.
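For example, instead of overriding the Host header you can have `curl` set both DNS resolution and SNI correctly (reusing the placeholder IP and hostname from above):

```console
$ curl --resolve app.example.com:443:1.2.3.4 https://app.example.com
```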
Another common issue is load balancers in front of Istio.
Most cloud load balancers will not forward the SNI, so if you are terminating TLS in your cloud load balancer you may need to do one of the following:
- Configure the cloud load balancer to instead passthrough the TLS connection
- Disable SNI matching in the `Gateway` by setting the hosts field to `*`
A common symptom of this is for the load balancer health checks to succeed while real traffic fails.
## Unchanged Envoy filter configuration suddenly stops working
An `EnvoyFilter` configuration that specifies an insert position relative to another filter can be very
fragile because, by default, the order of evaluation is based on the creation time of the filters.
Consider a filter with the following specification:
spec:
configPatches:
- applyTo: NETWORK_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
portNumber: 443
filterChain:
filter:
name: istio.stats
patch:
operation: INSERT_BEFORE
value:
...
To work properly, this filter configuration depends on the `istio.stats` filter having an older creation time
than it. Otherwise, the `INSERT_BEFORE` operation will be silently ignored. There will be nothing in the
error log to indicate that this filter has not been added to the chain.
This is particularly problematic when matching filters, like `istio.stats`, that are version
specific (i.e., that include the `proxyVersion` field in their match criteria). Such filters may be removed
or replaced by newer ones when upgrading Istio. As a result, an `EnvoyFilter` like the one above may initially
be working perfectly but after upgrading Istio to a newer version it will no longer be included in the network
filter chain of the sidecars.
To avoid this issue, you can either change the operation to one that does not depend on the presence of
another filter (e.g., `INSERT_FIRST`), or set an explicit priority in the `EnvoyFilter` to override the
default creation time-based ordering. For example, adding `priority: 10` to the above filter will ensure
that it is processed after the `istio.stats` filter which has a default priority of 0.
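For instance, the fix is just one additional field on the same resource; a sketch based on the filter above:

```yaml
spec:
  priority: 10   # evaluated after patches with the default priority of 0, such as istio.stats
  configPatches:
  - applyTo: NETWORK_FILTER
    ...
```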
## Virtual service with fault injection and retry/timeout policies not working as expected
Currently, Istio does not support configuring fault injections and retry or timeout policies on the
same `VirtualService`. Consider the following configuration:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: helloworld
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- match:
- uri:
exact: /hello
fault:
abort:
httpStatus: 500
percentage:
value: 50
retries:
attempts: 5
retryOn: 5xx
route:
- destination:
host: helloworld
port:
number: 5000
You would expect that given the configured five retry attempts, the user would almost never see any
errors when calling the `helloworld` service. However since both fault and retries are configured on
the same `VirtualService`, the retry configuration does not take effect, resulting in a 50% failure
rate. To work around this issue, you may remove the fault config from your `VirtualService` and
inject the fault to the upstream Envoy proxy using `EnvoyFilter` instead:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: hello-world-filter
spec:
workloadSelector:
labels:
app: helloworld
configPatches:
- applyTo: HTTP_FILTER
match:
      context: SIDECAR_INBOUND # will match inbound listeners in all sidecars
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
patch:
operation: INSERT_BEFORE
value:
name: envoy.fault
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.fault.v3.HTTPFault"
abort:
http_status: 500
percentage:
numerator: 50
denominator: HUNDRED
This works because the retry policy is configured for the client proxy, while the fault
injection is configured for the upstream proxy.
---
title: Sidecar Injection Problems
description: Resolve common problems with Istio's use of Kubernetes webhooks for automatic sidecar injection.
force_inline_toc: true
weight: 40
aliases:
- /docs/ops/troubleshooting/injection
owner: istio/wg-user-experience-maintainers
test: n/a
---
## The result of sidecar injection was not what I expected
This includes an injected sidecar when it wasn't expected and a lack
of injected sidecar when it was.
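A quick first check is to list the containers in the pod; if injection succeeded you should see an `istio-proxy` container next to your application container. A minimal sketch, where `<pod-name>`, `<namespace>`, and the `myapp` container in the sample output are placeholders:

```console
$ kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].name}'
myapp istio-proxy
```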
1. Ensure your pod is not in the `kube-system` or `kube-public` namespace.
Automatic sidecar injection will be ignored for pods in these namespaces.
1. Ensure your pod does not have `hostNetwork: true` in its pod spec.
Automatic sidecar injection will be ignored for pods that are on the host network.
The sidecar model assumes that the iptables changes required for Envoy to intercept
traffic are within the pod. For pods on the host network this assumption is violated,
and this can lead to routing failures at the host level.
1. Check the webhook's `namespaceSelector` to determine whether the
webhook is scoped to opt-in or opt-out for the target namespace.
The `namespaceSelector` for opt-in will look like the following:
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5
namespaceSelector:
matchLabels:
istio-injection: enabled
rules:
- apiGroups:
- ""
The injection webhook will be invoked for pods created
in namespaces with the `istio-injection=enabled` label.
$ kubectl get namespace -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
default Active 18d enabled
istio-system Active 3d
kube-public Active 18d
kube-system Active 18d
The `namespaceSelector` for opt-out will look like the following:
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5
namespaceSelector:
matchExpressions:
- key: istio-injection
operator: NotIn
values:
- disabled
rules:
- apiGroups:
- ""
The injection webhook will be invoked for pods created in namespaces
without the `istio-injection=disabled` label.
$ kubectl get namespace -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
default Active 18d
istio-system Active 3d disabled
kube-public Active 18d disabled
kube-system Active 18d disabled
Verify the application pod's namespace is labeled properly and (re) label accordingly, e.g.
$ kubectl label namespace istio-system istio-injection=disabled --overwrite
(repeat for all namespaces in which the injection webhook should be invoked for new pods)
$ kubectl label namespace default istio-injection=enabled --overwrite
1. Check default policy
Check the default injection policy in the `istio-sidecar-injector` configmap.
$ kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep policy:
policy: enabled
Allowed policy values are `disabled` and `enabled`. The default policy
only applies if the webhook’s `namespaceSelector` matches the target
namespace. Unrecognized policy causes injection to be disabled completely.
1. Check the per-pod override label
The default policy can be overridden with the
`sidecar.istio.io/inject` label in the _pod template spec’s metadata_.
The deployment’s metadata is ignored. Label value
of `true` forces the sidecar to be injected while a value of
`false` forces the sidecar to _not_ be injected.
The following label overrides whatever the default `policy` was
to force the sidecar to be injected:
$ kubectl get deployment curl -o yaml | grep "sidecar.istio.io/inject:" -B4
template:
metadata:
labels:
app: curl
sidecar.istio.io/inject: "true"
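If you need to set or flip this label on an existing workload, such as the `curl` deployment shown above, one option is to patch the pod template; this rolls the pods so the injection webhook re-evaluates them:

```console
$ kubectl patch deployment curl -p '{"spec":{"template":{"metadata":{"labels":{"sidecar.istio.io/inject":"true"}}}}}'
```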
## Pods cannot be created at all
Run `kubectl describe -n namespace deployment name` on the failing
pod's deployment. Failure to invoke the injection webhook will
typically be captured in the event log.
### x509 certificate related errors
Warning FailedCreate 3m (x17 over 8m) replicaset-controller Error creating: Internal error occurred: \
failed calling admission webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject: \
x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying \
to verify candidate authority certificate "Kubernetes.cluster.local")
`x509: certificate signed by unknown authority` errors are typically
caused by an empty `caBundle` in the webhook configuration.
Verify the `caBundle` in the `mutatingwebhookconfiguration` matches the
root certificate mounted in the `istiod` pod.
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | md5sum
4b95d2ba22ce8971c7c92084da31faf0 -
$ kubectl -n istio-system get configmap istio-ca-root-cert -o jsonpath='{.data.root-cert\.pem}' | base64 -w 0 | md5sum
4b95d2ba22ce8971c7c92084da31faf0 -
The two checksums should match. If they do not, restart the
istiod pods.
$ kubectl -n istio-system patch deployment istiod \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
deployment.extensions "istiod" patched
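If your `kubectl` version supports it (v1.15 or later), an equivalent and simpler way to restart the istiod pods is a rollout restart:

```console
$ kubectl -n istio-system rollout restart deployment/istiod
```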
### Errors in deployment status
When automatic sidecar injection is enabled for a pod, and the injection fails for any reason, the pod creation
will also fail. In such cases, you can check the deployment status of the pod to identify the error. The errors
will also appear in the events of the namespace associated with the deployment.
For example, if the `istiod` control plane pod was not running when you tried to deploy your pod, the events would show the following error:
$ kubectl get events -n curl
...
23m Normal SuccessfulCreate replicaset/curl-9454cc476 Created pod: curl-9454cc476-khp45
22m Warning FailedCreate replicaset/curl-9454cc476 Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp 10.96.44.51:443: connect: connection refused
$ kubectl -n istio-system get pod -lapp=istiod
NAME READY STATUS RESTARTS AGE
istiod-7d46d8d9db-jz2mh 1/1 Running 0 2d
$ kubectl -n istio-system get endpoints istiod
NAME ENDPOINTS AGE
istiod 10.244.2.8:15012,10.244.2.8:15010,10.244.2.8:15017 + 1 more... 3h18m
If the istiod pod or endpoints aren't ready, check the pod logs and status
for any indication about why the webhook pod is failing to start and
serve traffic.
$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \
    kubectl -n istio-system logs ${pod}; \
done
$ for pod in $(kubectl -n istio-system get pod -l app=istiod -o name); do \
    kubectl -n istio-system describe ${pod}; \
done
## Automatic sidecar injection fails if the Kubernetes API server has proxy settings
When the Kubernetes API server includes proxy settings such as:
env:
- name: http_proxy
value: http://proxy-wsa.esl.foo.com:80
- name: https_proxy
value: http://proxy-wsa.esl.foo.com:80
- name: no_proxy
value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127
With these settings, sidecar injection fails. The only related failure log can be found in the `kube-apiserver` log:
W0227 21:51:03.156818 1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: Service Unavailable
Make sure both pod and service CIDRs are not proxied according to `*_proxy` variables. Check the `kube-apiserver` files and logs to verify the configuration and whether any requests are being proxied.
One workaround is to remove the proxy settings from the `kube-apiserver` manifest; another is to include `istio-sidecar-injector.istio-system.svc` or `.svc` in the `no_proxy` value. Make sure that `kube-apiserver` is restarted after each workaround.
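For example, with the second workaround the `no_proxy` entry from the manifest above could be extended as follows (keep your own existing entries; the hostnames and IPs shown are from the example above):

```yaml
- name: no_proxy
  value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127,istio-sidecar-injector.istio-system.svc
```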
A related [issue](https://github.com/kubernetes/kubeadm/issues/666) was filed with Kubernetes and has since been closed; see also [this discussion](https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443).
## Limitations for using Tcpdump in pods
Tcpdump doesn't work in the sidecar container because it doesn't run as root. However, any other container in the same pod will see all the packets, since the
network namespace is shared. `iptables` will also see the pod-wide configuration.
Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted.
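For example, if your cluster supports ephemeral debug containers, you can run a capture from a temporary container that shares the pod's network namespace. The `nicolaka/netshoot` image is just one commonly used option, not something Istio prescribes, and `<pod-name>` is a placeholder:

```console
$ kubectl debug -it <pod-name> --image=nicolaka/netshoot -- tcpdump -i any -n port 80
```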
## Cluster is not scaled down automatically
Due to the fact that the sidecar container mounts a local storage volume, the
node autoscaler is unable to evict nodes with the injected pods. This is
a [known issue](https://github.com/kubernetes/autoscaler/issues/3947). The workaround is
to add a pod annotation `"cluster-autoscaler.kubernetes.io/safe-to-evict":
"true"` to the injected pods.
## Pod or containers start with network issues if istio-proxy is not ready
Many applications execute commands or checks during startup, which require network connectivity. This can cause application containers to hang or restart if the `istio-proxy` sidecar container is not ready.
To avoid this, set `holdApplicationUntilProxyStarts` to `true`. This causes the sidecar injector to inject the sidecar at the start of the pod’s container list, and configures it to block the start of all other containers until the proxy is ready.
This can be added as a global config option:
values.global.proxy.holdApplicationUntilProxyStarts: true
or as a pod annotation:
proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
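For example, if you install Istio with the `IstioOperator` API, the global option could be set like this (a sketch; adapt it to your own installation method):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        holdApplicationUntilProxyStarts: true
```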
---
title: Security Problems
description: Techniques to address common Istio authentication, authorization, and general security-related problems.
force_inline_toc: true
weight: 20
keywords: [security,citadel]
aliases:
- /help/ops/security/repairing-citadel
- /help/ops/troubleshooting/repairing-citadel
- /docs/ops/troubleshooting/repairing-citadel
owner: istio/wg-security-maintainers
test: n/a
---
## End-user authentication fails
With Istio, you can enable authentication for end users through [request authentication policies](/docs/tasks/security/authentication/authn-policy/#end-user-authentication). Follow these steps to troubleshoot the policy specification.
1. If `jwksUri` isn't set, make sure the JWT issuer is a URL and that `url + /.well-known/openid-configuration` can be opened in a browser; for example, if the JWT issuer is `https://accounts.google.com`, make sure `https://accounts.google.com/.well-known/openid-configuration` is a valid URL that can be opened in a browser.
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
name: "example-3"
spec:
selector:
matchLabels:
app: httpbin
jwtRules:
- issuer: "[email protected]"
jwksUri: "/security/tools/jwt/samples/jwks.json"
1. If the JWT token is placed in the Authorization header in http requests, make sure the JWT token is valid (not expired, etc). The fields in a JWT token can be decoded by using online JWT parsing tools, e.g., [jwt.io](https://jwt.io/).
1. Verify the Envoy proxy configuration of the target workload using `istioctl proxy-config` command.
With the example policy above applied, use the following command to check the `listener` configuration on the inbound port `80`. You should see `envoy.filters.http.jwt_authn` filter with settings matching the issuer and JWKS as specified in the policy.
$ POD=$(kubectl get pod -l app=httpbin -n foo -o jsonpath={.items..metadata.name})
$ istioctl proxy-config listener ${POD} -n foo --port 80 --type HTTP -o json
<redacted>
{
"name": "envoy.filters.http.jwt_authn",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.filter.http.jwt_authn.v2alpha.JwtAuthentication",
"providers": {
"origins-0": {
"issuer": "[email protected]",
"localJwks": {
"inlineString": "*redacted*"
},
"payloadInMetadata": "[email protected]"
}
},
"rules": [
{
"match": {
"prefix": "/"
},
"requires": {
"requiresAny": {
"requirements": [
{
"providerName": "origins-0"
},
{
"allowMissing": {}
}
]
}
}
}
]
}
},
<redacted>
## Authorization is too restrictive or permissive
### Make sure there are no typos in the policy YAML file
One common mistake is specifying multiple items unintentionally in the YAML. Take the following policy as an example:
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: example
namespace: foo
spec:
action: ALLOW
rules:
- to:
- operation:
paths:
- /foo
- from:
- source:
namespaces:
- foo
You may expect the policy to allow requests if the path is `/foo` **and** the source namespace is `foo`.
However, the policy actually allows requests if the path is `/foo` **or** the source namespace is `foo`, which is
more permissive.
In the YAML syntax, the `-` in front of the `from:` means it's a new element in the list. This creates 2 rules in the
policy instead of 1. In authorization policy, multiple rules have the semantics of `OR`.
To fix the problem, just remove the extra `-` to make the policy have only 1 rule that allows requests if the
path is `/foo` **and** the source namespace is `foo`, which is more restrictive.
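For reference, the corrected policy contains a single rule with both conditions:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: example
  namespace: foo
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        paths:
        - /foo
    from:
    - source:
        namespaces:
        - foo
```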
### Make sure you are NOT using HTTP-only fields on TCP ports
If you use HTTP-only fields (e.g. `host`, `path`, `headers`, JWT, etc.) on a TCP port, the authorization policy becomes more restrictive than
expected because these fields do not exist in raw TCP connections.
In the case of `ALLOW` policy, these fields are never matched. In the case of `DENY` and `CUSTOM` action, these fields
are considered always matched. The final effect is a more restrictive policy that could cause unexpected denies.
Check the Kubernetes service definition to verify that the port is [named with the correct protocol properly](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection).
If you are using HTTP-only fields on the port, make sure the port name has the `http-` prefix.
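For example, a Service like the hypothetical one below declares its port as HTTP through the port name, so HTTP-only fields can be matched for traffic on that port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - name: http-web   # the http- prefix enables explicit HTTP protocol selection
    port: 8080
    targetPort: 8080
```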
### Make sure the policy is applied to the correct target
Check the workload selector and namespace to confirm it's applied to the correct targets. You can determine the
authorization policy in effect by running `istioctl x authz check POD-NAME.POD-NAMESPACE`.
### Pay attention to the action specified in the policy
- If not specified, the policy defaults to the `ALLOW` action.
- When a workload has multiple actions (`CUSTOM`, `ALLOW` and `DENY`) applied at the same time, all actions must be
satisfied to allow a request. In other words, a request is denied if any of the action denies and is allowed only if
all actions allow.
- The `AUDIT` action does not enforce access control and will never deny the request.
Read [authorization implicit enablement](/docs/concepts/security/#implicit-enablement) for more details of the evaluation order.
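As an illustration of the combined semantics, with the two hypothetical policies below a `GET /admin` request is denied: the `DENY` policy matches it, even though the `ALLOW` policy matches as well.

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-admin
  namespace: foo
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/admin*"]
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-get
  namespace: foo
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
```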
## Ensure Istiod accepts the policies
Istiod converts and distributes your authorization policies to the proxies. The following steps help
you ensure Istiod is working as expected:
1. Run the following command to enable the debug logging in istiod:
$ istioctl admin log --level authorization:debug
1. Get the Istiod log with the following command:
You probably need to first delete and then re-apply your authorization policies so that
the debug output is generated for these policies.
$ kubectl logs $(kubectl -n istio-system get pods -l app=istiod -o jsonpath='{.items[0].metadata.name}') -c discovery -n istio-system
1. Check the output and verify there are no errors. For example, you might see something similar to the following:
2021-04-23T20:53:29.507314Z info ads Push debounce stable[31] 1: 100.981865ms since last change, 100.981653ms since last push, full=true
2021-04-23T20:53:29.507641Z info ads XDS: Pushing:2021-04-23T20:53:29Z/23 Services:15 ConnectedEndpoints:2 Version:2021-04-23T20:53:29Z/23
2021-04-23T20:53:29.507911Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.508077Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.508128Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions
* generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on HTTP filter chain successfully
* built 1 HTTP filters for DENY action
* added 1 HTTP filters to filter chain 0
* added 1 HTTP filters to filter chain 1
2021-04-23T20:53:29.508158Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions
2021-04-23T20:53:29.509097Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.509167Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions
2021-04-23T20:53:29.509501Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.509652Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions
* generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on HTTP filter chain successfully
* built 1 HTTP filters for DENY action
* added 1 HTTP filters to filter chain 0
* added 1 HTTP filters to filter chain 1
* generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on TCP filter chain successfully
* built 1 TCP filters for DENY action
* added 1 TCP filters to filter chain 2
* added 1 TCP filters to filter chain 3
* added 1 TCP filters to filter chain 4
2021-04-23T20:53:29.510903Z info ads LDS: PUSH for node:curl-557747455f-6dxbl.foo resources:18 size:85.0kB
2021-04-23T20:53:29.511487Z info ads LDS: PUSH for node:httpbin-74fb669cc6-lpscm.foo resources:18 size:86.4kB
This shows that Istiod generated:
- An HTTP filter config with policy `ns[foo]-policy[deny-path-headers]-rule[0]` for workload `httpbin-74fb669cc6-lpscm.foo`.
- A TCP filter config with policy `ns[foo]-policy[deny-path-headers]-rule[0]` for workload `httpbin-74fb669cc6-lpscm.foo`.
## Ensure Istiod distributes policies to proxies correctly
Istiod distributes the authorization policies to proxies. The following steps help you ensure istiod is working as expected:
The command below assumes you have deployed `httpbin`. If you are not using `httpbin`, replace `"-l app=httpbin"` with
the label selector for your actual pod.
1. Run the following command to get the proxy configuration dump for the `httpbin` workload:
$ kubectl exec $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- pilot-agent request GET config_dump
1. Check the configuration dump and verify:
- The dump includes an `envoy.filters.http.rbac` filter to enforce the authorization policy on each incoming request.
- Istio updates the filter accordingly after you update your authorization policy.
1. The following output means the proxy of `httpbin` has enabled the `envoy.filters.http.rbac` filter, with rules that reject
any request to the path `/headers`.
{
"name": "envoy.filters.http.rbac",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.rbac.v3.RBAC",
"rules": {
"action": "DENY",
"policies": {
"ns[foo]-policy[deny-path-headers]-rule[0]": {
"permissions": [
{
"and_rules": {
"rules": [
{
"or_rules": {
"rules": [
{
"url_path": {
"path": {
"exact": "/headers"
}
}
}
]
}
}
]
}
}
],
"principals": [
{
"and_ids": {
"ids": [
{
"any": true
}
]
}
}
]
}
}
},
"shadow_rules_stat_prefix": "istio_dry_run_allow_"
}
},
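The configuration dump from step 1 is large; assuming `jq` is installed on the machine where you run `kubectl`, you can narrow the output down to the RBAC filter with something like:

```console
$ kubectl exec $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- pilot-agent request GET config_dump | jq '.. | select(.name? == "envoy.filters.http.rbac")'
```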
## Ensure proxies enforce policies correctly
Proxies eventually enforce the authorization policies. The following steps help you ensure the proxy is working as expected:
The command below assumes you have deployed `httpbin`. If you are not using `httpbin`, replace `"-l app=httpbin"` with
the label selector for your actual pod.
1. Turn on the authorization debug logging in proxy with the following command:
$ istioctl proxy-config log deploy/httpbin --level "rbac:debug"
1. Verify you see the following output:
active loggers:
... ...
rbac: debug
... ...
1. Send some requests to the `httpbin` workload to generate some logs.
1. Print the proxy logs with the following command:
$ kubectl logs $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy
1. Check the output and verify:
- The output log shows either `enforced allowed` or `enforced denied` depending on whether the request
was allowed or denied respectively.
- The data extracted from the request (shown in the log) matches what your authorization policy expects.
1. The following is an example output for a request to path `/headers`:
...
2021-04-23T20:43:18.552857Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:46180, directRemoteIP: 10.44.3.13:46180, remoteIP: 10.44.3.13:46180,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
':path', '/headers'
':method', 'GET'
':scheme', 'http'
'user-agent', 'curl/7.76.1-DEV'
'accept', '*/*'
'x-forwarded-proto', 'http'
'x-request-id', '672c9166-738c-4865-b541-128259cc65e5'
'x-envoy-attempt-count', '1'
'x-b3-traceid', '8a124905edf4291a21df326729b264e9'
'x-b3-spanid', '21df326729b264e9'
'x-b3-sampled', '0'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/curl'
, dynamicMetadata: filter_metadata {
key: "istio_authn"
value {
fields {
key: "request.auth.principal"
value {
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
key: "source.namespace"
value {
string_value: "foo"
}
}
fields {
key: "source.principal"
value {
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
key: "source.user"
value {
string_value: "cluster.local/ns/foo/sa/curl"
}
}
}
}
2021-04-23T20:43:18.552910Z debug envoy rbac enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]
...
The log `enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]` means the request is rejected by
the policy `ns[foo]-policy[deny-path-headers]-rule[0]`.
1. The following is an example output for authorization policy in the [dry-run mode](/docs/tasks/security/authorization/authz-dry-run):
...
2021-04-23T20:59:11.838468Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:49826, directRemoteIP: 10.44.3.13:49826, remoteIP: 10.44.3.13:49826,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
':path', '/headers'
':method', 'GET'
':scheme', 'http'
'user-agent', 'curl/7.76.1-DEV'
'accept', '*/*'
'x-forwarded-proto', 'http'
'x-request-id', 'e7b2fdb0-d2ea-4782-987c-7845939e6313'
'x-envoy-attempt-count', '1'
'x-b3-traceid', '696607fc4382b50017c1f7017054c751'
'x-b3-spanid', '17c1f7017054c751'
'x-b3-sampled', '0'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/curl'
, dynamicMetadata: filter_metadata {
key: "istio_authn"
value {
fields {
key: "request.auth.principal"
value {
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
key: "source.namespace"
value {
string_value: "foo"
}
}
fields {
key: "source.principal"
value {
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
key: "source.user"
value {
string_value: "cluster.local/ns/foo/sa/curl"
}
}
}
}
2021-04-23T20:59:11.838529Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]
2021-04-23T20:59:11.838538Z debug envoy rbac no engine, allowed by default
...
The log `shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]` means the request would be rejected
by the **dry-run** policy `ns[foo]-policy[deny-path-headers]-rule[0]`.
The log `no engine, allowed by default` means the request is actually allowed because the dry-run policy is the
only policy on the workload.
## Keys and certificates errors
If you suspect that some of the keys and/or certificates used by Istio aren't correct, you can inspect the contents from any pod:
$ istioctl proxy-config secret curl-8f795f47d-4s4t7
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 138092480869518152837211547060273851586 2020-11-11T16:39:48Z 2020-11-10T16:39:48Z
ROOTCA CA ACTIVE true 288553090258624301170355571152070165215 2030-11-08T16:34:52Z 2020-11-10T16:34:52Z
By passing the `-o json` flag, you can extract the full certificate content and analyze it with `openssl`:
$ istioctl proxy-config secret curl-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
99:59:6b:a2:5a:f4:20:f4:03:d7:f0:bc:59:f5:d8:40
Signature Algorithm: sha256WithRSAEncryption
Issuer: O = k8s.cluster.local
Validity
Not Before: Jun 4 20:38:20 2018 GMT
Not After : Sep 2 20:38:20 2018 GMT
...
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Alternative Name:
URI:spiffe://cluster.local/ns/my-ns/sa/my-sa
...
Make sure the displayed certificate contains valid information. In particular, the `Subject Alternative Name` field should be `URI:spiffe://cluster.local/ns/my-ns/sa/my-sa`.
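You can inspect the root certificate (the `ROOTCA` resource) in the same way. This is a sketch that assumes the same pod name as above and that the JSON field names match your Envoy version:

```console
$ istioctl proxy-config secret curl-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == "ROOTCA")][0].secret.validationContext.trustedCa.inlineBytes' -r | base64 -d | openssl x509 -noout -text
```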
## Mutual TLS errors
If you suspect problems with mutual TLS, first ensure that istiod is healthy, and
second ensure that [keys and certificates are being delivered](#keys-and-certificates-errors) to sidecars properly.
If everything appears to be working so far, the next step is to verify that the right [authentication policy](/docs/tasks/security/authentication/authn-policy/)
is applied and the right destination rules are in place.
If you suspect the client side sidecar may send mutual TLS or plaintext traffic incorrectly, check the
[Grafana Workload dashboard](/docs/ops/integrations/grafana/). Outbound requests are annotated to show whether or not mTLS
is used. After checking this, if you believe the client sidecars are misbehaving, report an issue on GitHub.
the full certificate content to openssl to analyze its contents istioctl proxy config secret curl 8f795f47d 4s4t7 o json jq dynamicActiveSecrets select name default 0 secret tlsCertificate certificateChain inlineBytes r base64 d openssl x509 noout text Certificate Data Version 3 0x2 Serial Number 99 59 6b a2 5a f4 20 f4 03 d7 f0 bc 59 f5 d8 40 Signature Algorithm sha256WithRSAEncryption Issuer O k8s cluster local Validity Not Before Jun 4 20 38 20 2018 GMT Not After Sep 2 20 38 20 2018 GMT X509v3 extensions X509v3 Key Usage critical Digital Signature Key Encipherment X509v3 Extended Key Usage TLS Web Server Authentication TLS Web Client Authentication X509v3 Basic Constraints critical CA FALSE X509v3 Subject Alternative Name URI spiffe cluster local ns my ns sa my sa Make sure the displayed certificate contains valid information In particular the Subject Alternative Name field should be URI spiffe cluster local ns my ns sa my sa Mutual TLS errors If you suspect problems with mutual TLS first ensure that istiod is healthy and second ensure that keys and certificates are being delivered keys and certificates errors to sidecars properly If everything appears to be working so far the next step is to verify that the right authentication policy docs tasks security authentication authn policy is applied and the right destination rules are in place If you suspect the client side sidecar may send mutual TLS or plaintext traffic incorrectly check the Grafana Workload dashboard docs ops integrations grafana The outbound requests are annotated whether mTLS is used or not After checking this if you believe the client sidecars are misbehaved report an issue on GitHub |
istio owner istio wg policies and telemetry maintainers EnvoyFilter migration Resolve common problems with Istio upgrades title Upgrade Problems test n a weight 60 | ---
title: Upgrade Problems
description: Resolve common problems with Istio upgrades.
weight: 60
owner: istio/wg-policies-and-telemetry-maintainers
test: n/a
---
## EnvoyFilter migration
`EnvoyFilter` is an alpha API that is tightly coupled to the implementation
details of Istio xDS configuration generation. Production use of the
`EnvoyFilter` alpha API must be carefully curated during the upgrade of Istio's
control or data plane. In many instances, `EnvoyFilter` can be replaced with a
first-class Istio API which carries substantially lower upgrade risks.
### Use Telemetry API for metrics customization
The usage of `IstioOperator` to customize Prometheus metrics generation has been
replaced by the [Telemetry API](/docs/tasks/observability/metrics/customize-metrics/),
because `IstioOperator` relies on a template `EnvoyFilter` to change the
metrics filter configuration. Note that the two methods are incompatible, and
the Telemetry API does not work with `EnvoyFilter` or `IstioOperator` metric
customization configuration.
As an example, the following `IstioOperator` configuration adds a `destination_port` tag:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
values:
telemetry:
v2:
prometheus:
configOverride:
inboundSidecar:
metrics:
- name: requests_total
dimensions:
destination_port: string(destination.port)
The following `Telemetry` configuration replaces the above:
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
name: namespace-metrics
spec:
metrics:
- providers:
- name: prometheus
overrides:
- match:
metric: REQUEST_COUNT
mode: SERVER
tagOverrides:
destination_port:
value: "string(destination.port)"
### Use the WasmPlugin API for Wasm data plane extensibility
The usage of `EnvoyFilter` to inject Wasm filters has been replaced by the
[WasmPlugin API](/docs/tasks/extensibility/wasm-module-distribution).
The WasmPlugin API allows dynamic loading of plugins from artifact registries,
URLs, or local files. The "Null" plugin runtime is no longer a recommended option
for deployment of Wasm code.
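For reference, a minimal `WasmPlugin` sketch is shown below. The plugin name, OCI image URL, and gateway selector are placeholders rather than values from this guide; substitute your own module location and target workloads:
```yaml
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: example-filter              # hypothetical plugin name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway         # assumed target workload
  url: oci://ghcr.io/example/wasm-filter:v1   # placeholder OCI image
  phase: AUTHN
  pluginConfig:
    example_key: example_value      # illustrative plugin-specific configuration
```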
### Use gateway topology to set the number of trusted hops
The usage of `EnvoyFilter` to configure the number of trusted hops in the
HTTP connection manager has been replaced by the
[`gatewayTopology`](/docs/reference/config/istio.mesh.v1alpha1/#Topology)
field in
[`ProxyConfig`](/docs/ops/configuration/traffic-management/network-topologies).
For example, the following `EnvoyFilter` configuration should be replaced with an annotation
on the pod or with the mesh-wide default. Instead of:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: ingressgateway-redirect-config
spec:
configPatches:
- applyTo: NETWORK_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
patch:
operation: MERGE
value:
typed_config:
'@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
xff_num_trusted_hops: 1
workloadSelector:
labels:
istio: ingress-gateway
Use the equivalent ingress gateway pod proxy configuration annotation:
metadata:
annotations:
"proxy.istio.io/config": '{"gatewayTopology" : { "numTrustedProxies": 1 }}'
### Use gateway topology to enable PROXY protocol on the ingress gateways
The usage of `EnvoyFilter` to enable [PROXY
protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) on the
ingress gateways has been replaced by the
[`gatewayTopology`](/docs/reference/config/istio.mesh.v1alpha1/#Topology)
field in
[`ProxyConfig`](/docs/ops/configuration/traffic-management/network-topologies).
For example, the following `EnvoyFilter` configuration should be replaced with an annotation
on the pod or with the mesh-wide default. Instead of:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: proxy-protocol
spec:
configPatches:
- applyTo: LISTENER_FILTER
patch:
operation: INSERT_FIRST
value:
name: proxy_protocol
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol"
workloadSelector:
labels:
istio: ingress-gateway
Use the equivalent ingress gateway pod proxy configuration annotation:
metadata:
annotations:
"proxy.istio.io/config": '{"gatewayTopology" : { "proxyProtocol": {} }}'
### Use a proxy annotation to customize the histogram bucket sizes
The usage of `EnvoyFilter` and the experimental bootstrap discovery service to
configure the bucket sizes for the histogram metrics has been replaced by the
proxy annotation `sidecar.istio.io/statsHistogramBuckets`. For example, the
following `EnvoyFilter` configuration should be replaced with an annotation on the pod.
Instead of:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: envoy-stats-1
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: BOOTSTRAP
patch:
operation: MERGE
value:
stats_config:
histogram_bucket_settings:
- match:
prefix: istiocustom
buckets: [1,5,50,500,5000,10000]
Use the equivalent pod annotation:
metadata:
annotations:
"sidecar.istio.io/statsHistogramBuckets": '{"istiocustom":[1,5,50,500,5000,10000]}'
title: Configuration Validation Problems
description: Describes how to resolve configuration validation problems.
force_inline_toc: true
weight: 50
aliases:
- /help/ops/setup/validation
- /help/ops/troubleshooting/validation
- /docs/ops/troubleshooting/validation
owner: istio/wg-user-experience-maintainers
test: no
---
## Seemingly valid configuration is rejected
Use [istioctl validate -f](/docs/reference/commands/istioctl/#istioctl-validate) and [istioctl analyze](/docs/reference/commands/istioctl/#istioctl-analyze) for more insight into why the configuration is rejected. Use an _istioctl_ CLI with a similar version to the control plane version.
The most commonly reported problems with configuration are YAML indentation and array notation (`-`) mistakes.
Manually verify your configuration is correct, cross-referencing
[Istio API reference](/docs/reference/config) when
necessary.
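For example, assuming a manifest file named `my-config.yaml` (a placeholder), you might run:
```console
$ istioctl validate -f my-config.yaml
$ istioctl analyze --namespace default
```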
## Invalid configuration is accepted
Verify that a `validatingwebhookconfiguration` named `istio-validator-` followed by
`<revision>-`, if not the default revision, followed by the Istio system namespace
(e.g., `istio-validator-myrev-istio-system`) exists and is correct.
The `apiVersion`, `apiGroup`, and `resource` of the
invalid configuration should be listed in the `webhooks` section of the `validatingwebhookconfiguration`.
$ kubectl get validatingwebhookconfiguration istio-validator-istio-system -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app: istiod
install.operator.istio.io/owning-resource-namespace: istio-system
istio: istiod
istio.io/rev: default
operator.istio.io/component: Pilot
operator.istio.io/managed: Reconcile
operator.istio.io/version: unknown
release: istio
name: istio-validator-istio-system
resourceVersion: "615569"
uid: 112fed62-93e7-41c9-8cb1-b2665f392dd7
webhooks:
- admissionReviewVersions:
- v1beta1
- v1
clientConfig:
# caBundle should be non-empty. This is periodically (re)patched
# every second by the webhook service using the ca-cert
# from the mounted service account secret.
caBundle: LS0t...
# service corresponds to the Kubernetes service that implements the webhook
service:
name: istiod
namespace: istio-system
path: /validate
port: 443
failurePolicy: Fail
matchPolicy: Equivalent
name: rev.validation.istio.io
namespaceSelector: {}
objectSelector:
matchExpressions:
- key: istio.io/rev
operator: In
values:
- default
rules:
- apiGroups:
- security.istio.io
- networking.istio.io
- telemetry.istio.io
- extensions.istio.io
apiVersions:
- '*'
operations:
- CREATE
- UPDATE
resources:
- '*'
scope: '*'
sideEffects: None
timeoutSeconds: 10
If the `istio-validator-` webhook does not exist, verify
the `global.configValidation` installation option is
set to `true`.
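With an `istioctl`-based installation, the option can be set explicitly. This is a sketch that simply restores the default value:
```console
$ istioctl install --set values.global.configValidation=true
```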
The validation configuration is fail-close. If
configuration exists and is scoped properly, the webhook will be
invoked. A missing `caBundle`, bad certificate, or network connectivity
problem will produce an error message when the resource is
created/updated. If you don’t see any error message and the webhook
wasn’t invoked and the webhook configuration is valid, your cluster is
misconfigured.
## Creating configuration fails with x509 certificate errors
`x509: certificate signed by unknown authority` related errors are
typically caused by an empty `caBundle` in the webhook
configuration. Verify that it is not empty (see [verify webhook
configuration](#invalid-configuration-is-accepted)). Istio consciously reconciles the webhook
configuration using the `istio-validation` configmap and the root certificate.
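One quick way to check whether the `caBundle` has been patched is to read it directly from the webhook configuration; an empty result means the patch has not happened yet:
```console
$ kubectl get validatingwebhookconfiguration istio-validator-istio-system \
    -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | wc -c
```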
1. Verify the `istiod` pod(s) are running:
$ kubectl -n istio-system get pod -lapp=istiod
NAME READY STATUS RESTARTS AGE
istiod-5dbbbdb746-d676g 1/1 Running 0 2d
1. Check the pod logs for errors. Failing to patch the
`caBundle` should print an error.
$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \
    kubectl -n istio-system logs ${pod}; \
done
1. If the patching failed, verify the RBAC configuration for Istiod:
$ kubectl get clusterrole istiod-istio-system -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istiod-istio-system
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- '*'
Istio needs `validatingwebhookconfigurations` write access to
create and update the `validatingwebhookconfiguration`.
## Creating configuration fails with `no such hosts` or `no endpoints available` errors
Validation is fail-close. If the `istiod` pod is not ready,
configuration cannot be created or updated. In such cases you’ll see
an error about `no endpoints available`.
Verify the `istiod` pod(s) are running and endpoints are ready.
$ kubectl -n istio-system get pod -lapp=istiod
NAME READY STATUS RESTARTS AGE
istiod-5dbbbdb746-d676g 1/1 Running 0 2d
$ kubectl -n istio-system get endpoints istiod
NAME ENDPOINTS AGE
istiod 10.48.6.108:15014,10.48.6.108:443 3d
If the pods or endpoints aren't ready, check the pod logs and
status for any indication about why the webhook pod is failing to start
and serve traffic.
$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \
    kubectl -n istio-system logs ${pod}; \
done
$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o name); do \
    kubectl -n istio-system describe ${pod}; \
done
title: Security Best Practices
description: Best practices for securing applications using Istio.
force_inline_toc: true
weight: 30
owner: istio/wg-security-maintainers
test: n/a
---
Istio security features provide strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA) tools to protect your services and data.
However, to fully make use of these features securely, care must be taken to follow best practices. It is recommended to review the [Security overview](/docs/concepts/security/) before proceeding.
## Mutual TLS
Istio will [automatically](/docs/ops/configuration/traffic-management/tls-configuration/#auto-mtls) encrypt traffic using [Mutual TLS](/docs/concepts/security/#mutual-tls-authentication) whenever possible.
However, proxies are configured in [permissive mode](/docs/concepts/security/#permissive-mode) by default, meaning they will accept both mutual TLS and plaintext traffic.
While this is required for incremental adoption or allowing traffic from clients without an Istio sidecar, it also weakens the security stance.
It is recommended to [migrate to strict mode](/docs/tasks/security/authentication/mtls-migration/) when possible, to enforce that mutual TLS is used.
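For reference, strict mode can be enabled mesh-wide with a single `PeerAuthentication` resource. A minimal sketch, assuming `istio-system` is your root namespace:
```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # assumed mesh root namespace
spec:
  mtls:
    mode: STRICT
```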
Mutual TLS alone is not always enough to fully secure traffic, however, as it provides only authentication, not authorization.
This means that anyone with a valid certificate can still access a service.
To fully lock down traffic, it is recommended to configure [authorization policies](/docs/tasks/security/authorization/).
These allow creating fine-grained policies to allow or deny traffic. For example, you can allow only requests from the `app` namespace to access the `hello-world` service.
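A minimal sketch of such a policy follows; the `default` namespace and the `app: hello-world` label are assumptions chosen to match the example:
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: hello-world-from-app
  namespace: default          # assumed namespace of the hello-world workload
spec:
  selector:
    matchLabels:
      app: hello-world        # assumed workload label
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["app"]
```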
## Authorization policies
Istio [authorization](/docs/concepts/security/#authorization) plays a critical part in Istio security.
It takes effort to configure the correct authorization policies to best protect your clusters.
It is important to understand the implications of these configurations as Istio cannot determine the proper authorization for all users.
Please follow this section in its entirety.
### Safer Authorization Policy Patterns
#### Use default-deny patterns
We recommend you define your Istio authorization policies following the default-deny pattern to enhance your cluster's security posture.
The default-deny authorization pattern means your system denies all requests by default, and you define the conditions in which the requests are allowed.
If you miss some conditions, traffic will be unexpectedly denied instead of unexpectedly allowed.
The latter is typically a security incident, while the former may result in a poor user experience, a service outage, or a missed SLO/SLA.
For example, in the [authorization for HTTP traffic task](/docs/tasks/security/authorization/authz-http/),
the authorization policy named `allow-nothing` makes sure all traffic is denied by default.
From there, other authorization policies allow traffic based on specific conditions.
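For reference, an `allow-nothing` policy is simply an `ALLOW` policy with no rules. The sketch below assumes it is applied to a `foo` namespace; applying it to the root namespace instead makes it mesh-wide:
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: foo   # assumed namespace; use the root namespace for mesh-wide scope
spec:
  {}
```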
#### Use `ALLOW-with-positive-matching` and `DENY-with-negative-matching` patterns
Use the `ALLOW-with-positive-matching` or `DENY-with-negative-matching` patterns whenever possible. These authorization policy
patterns are safer because the worst result in the case of policy mismatch is an unexpected 403 rejection instead of
an authorization policy bypass.
The `ALLOW-with-positive-matching` pattern is to use the `ALLOW` action only with **positive** matching fields (e.g. `paths`, `values`)
and never with any of the **negative** matching fields (e.g. `notPaths`, `notValues`).
The `DENY-with-negative-matching` pattern is to use the `DENY` action only with **negative** matching fields (e.g. `notPaths`, `notValues`)
and never with any of the **positive** matching fields (e.g. `paths`, `values`).
For example, the authorization policy below uses the `ALLOW-with-positive-matching` pattern to allow requests to path `/public`:
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: foo
spec:
action: ALLOW
rules:
- to:
- operation:
paths: ["/public"]
The above policy explicitly lists the allowed path (`/public`). This means the request path must be exactly the same as
`/public` to allow the request. Any other request will be rejected by default, eliminating the risk
of unknown normalization behavior causing policy bypass.
The following is an example using the `DENY-with-negative-matching` pattern to achieve the same result:
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: foo
spec:
action: DENY
rules:
- to:
- operation:
notPaths: ["/public"]
### Understand path normalization in authorization policy
The enforcement point for authorization policies is the Envoy proxy instead of the usual resource access point in the backend application. A policy mismatch happens when the Envoy proxy and the backend application interpret the request
differently.
A mismatch can lead to either unexpected rejection or a policy bypass. The latter is usually a security incident that needs to be
fixed immediately, and it's also why we need path normalization in the authorization policy.
For example, consider an authorization policy to reject requests with path `/data/secret`. A request with path `/data//secret` will
not be rejected because it does not match the path defined in the authorization policy due to the extra forward slash `/` in the path.
The request goes through and later the backend application returns the same response that it returns for the path `/data/secret`
because the backend application normalizes the path `/data//secret` to `/data/secret` as it considers the double forward slashes
`//` equivalent to a single forward slash `/`.
In this example, the policy enforcement point (Envoy proxy) had a different understanding of the path than the resource access
point (backend application). The different understanding caused the mismatch and subsequently the bypass of the authorization policy.
This becomes a complicated problem because of the following factors:
* Lack of a clear standard for the normalization.
* Backends and frameworks in different layers have their own special normalization.
* Applications can even have arbitrary normalizations for their own use cases.
Istio authorization policy implements built-in support of various basic normalization options to help you to better address
the problem:
* Refer to [Guideline on configuring the path normalization option](/docs/ops/best-practices/security/#guideline-on-configuring-the-path-normalization-option)
to understand which normalization options you may want to use.
* Refer to [Customize your system on path normalization](/docs/ops/best-practices/security/#customize-your-system-on-path-normalization) to
understand the detail of each normalization option.
* Refer to [Mitigation for unsupported normalization](/docs/ops/best-practices/security/#mitigation-for-unsupported-normalization) for
alternative solutions in case you need any unsupported normalization options.
### Guideline on configuring the path normalization option
#### Case 1: You do not need normalization at all
Before diving into the details of configuring normalization, you should first make sure that normalization is needed.
You do not need normalization if you don't use authorization policies or if your authorization policies don't
use any `path` fields.
You may not need normalization if all your authorization policies follow the [safer authorization pattern](/docs/ops/best-practices/security/#safer-authorization-policy-patterns)
which, in the worst case, results in unexpected rejection instead of policy bypass.
#### Case 2: You need normalization but not sure which normalization option to use
If you need normalization but have no idea which option to use, the safest choice is the strictest normalization option
that provides the maximum level of normalization in the authorization policy.
This is often the case because complicated multi-layered systems make it practically impossible to figure
out what normalization actually happens to a request beyond the enforcement point.
You could use a less strict normalization option if it already satisfies your requirements and you are sure of its implications.
For either option, make sure you write both positive and negative tests specifically for your requirements to verify the
normalization is working as expected. The tests are useful in catching potential bypass issues caused by a misunderstanding
or incomplete knowledge of the normalization happening to your request.
Refer to [Customize your system on path normalization](/docs/ops/best-practices/security/#customize-your-system-on-path-normalization)
for more details on configuring the normalization option.
#### Case 3: You need an unsupported normalization option
If you need a specific normalization option that is not supported by Istio yet, please follow
[Mitigation for unsupported normalization](/docs/ops/best-practices/security/#mitigation-for-unsupported-normalization)
for customized normalization support or create a feature request for the Istio community.
### Customize your system on path normalization
Istio authorization policies can be based on the URL paths in the HTTP request.
[Path normalization (a.k.a., URI normalization)](https://en.wikipedia.org/wiki/URI_normalization) modifies and standardizes the incoming requests' paths,
so that the normalized paths can be processed in a standard way.
Syntactically different paths may be equivalent after path normalization.
Istio supports the following normalization schemes on the request paths,
before evaluating against the authorization policies and routing the requests:
| Option | Description | Example |
| --- | --- | --- |
| `NONE` | No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. | `../%2Fa../b` is evaluated by the authorization policies and sent to your service. |
| `BASE` | This is currently the option used in the *default* installation of Istio. This applies the [`normalize_path`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#envoy-v3-api-field-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-normalize-path) option on Envoy proxies, which follows [RFC 3986](https://tools.ietf.org/html/rfc3986) with extra normalization to convert backslashes to forward slashes. | `/a/../b` is normalized to `/b`. `\da` is normalized to `/da`. |
| `MERGE_SLASHES` | Slashes are merged after the _BASE_ normalization. | `/a//b` is normalized to `/a/b`. |
| `DECODE_AND_MERGE_SLASHES` | The most strict setting when you allow all traffic by default. This setting is recommended, with the caveat that you will need to thoroughly test your authorization policies and routes. [Percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1) slash and backslash characters (`%2F`, `%2f`, `%5C` and `%5c`) are decoded to `/` or `\`, before the `MERGE_SLASHES` normalization. | `/a%2fb` is normalized to `/a/b`. |
The configuration is specified via the [`pathNormalization`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ProxyPathNormalization)
field in the [mesh config](/docs/reference/config/istio.mesh.v1alpha1/).
To emphasize, the normalization algorithms are conducted in the following order:
1. Percent-decode `%2F`, `%2f`, `%5C` and `%5c`.
1. The [RFC 3986](https://tools.ietf.org/html/rfc3986) and other normalization implemented by the [`normalize_path`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#envoy-v3-api-field-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-normalize-path) option in Envoy.
1. Merge slashes
While these normalization options represent recommendations from HTTP standards and common industry practices,
applications may interpret a URL in any way they choose. When using denial policies, ensure that you understand how your application behaves.
For a complete list of supported normalizations, please refer to [authorization policy normalization](/docs/reference/config/security/normalization/).
#### Examples of configuration
Ensuring Envoy normalizes request paths to match your backend services' expectation is critical to the security of your system.
The following examples can be used as reference for you to configure your system.
The normalized URL paths, or the original URL paths if _NONE_ is selected, will be:
1. Used to check against the authorization policies
1. Forwarded to the backend application
| Your application... | Choose... |
| --- | --- |
| Relies on the proxy to do normalization | `BASE`, `MERGE_SLASHES` or `DECODE_AND_MERGE_SLASHES` |
| Normalizes request paths based on [RFC 3986](https://tools.ietf.org/html/rfc3986) and does not merge slashes | `BASE` |
| Normalizes request paths based on [RFC 3986](https://tools.ietf.org/html/rfc3986), merges slashes but does not decode [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1) slashes | `MERGE_SLASHES` |
| Normalizes request paths based on [RFC 3986](https://tools.ietf.org/html/rfc3986), decodes [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1) slashes and merges slashes | `DECODE_AND_MERGE_SLASHES` |
| Processes request paths in a way that is incompatible with [RFC 3986](https://tools.ietf.org/html/rfc3986) | `NONE` |
#### How to configure
You can use `istioctl` to update the [mesh config](/docs/reference/config/istio.mesh.v1alpha1/):
$ istioctl upgrade --set meshConfig.pathNormalization.normalization=DECODE_AND_MERGE_SLASHES
or by altering your operator overrides file
$ cat <<EOF > iop.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
pathNormalization:
normalization: DECODE_AND_MERGE_SLASHES
EOF
$ istioctl install -f iop.yaml
Alternatively, if you want to directly edit the mesh config,
you can add the [`pathNormalization`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ProxyPathNormalization)
to the [mesh config](/docs/reference/config/istio.mesh.v1alpha1/), which is the `istio-<REVISION_ID>` configmap in the `istio-system` namespace.
For example, if you choose the `DECODE_AND_MERGE_SLASHES` option, modify the mesh config as follows:
apiVersion: v1
data:
mesh: |-
...
pathNormalization:
normalization: DECODE_AND_MERGE_SLASHES
...
### Mitigation for unsupported normalization
This section describes various mitigations for unsupported normalization. These could be useful when you need a specific
normalization that is not supported by Istio.
Please make sure you understand the mitigation thoroughly and use it carefully as some mitigations rely on things that are
outside the scope of Istio and are not supported by Istio.
#### Custom normalization logic
You can apply custom normalization logic using the WASM or Lua filter. It is recommended to use the WASM filter because
it's officially supported and also used by Istio. You could use the Lua filter for a quick proof-of-concept demo, but we do
not recommend using the Lua filter in production because it is not supported by Istio.
##### Example custom normalization (case normalization)
In some environments, it may be useful to have paths in authorization policies compared in a case insensitive manner.
For example, treating `https://myurl/get` and `https://myurl/GeT` as equivalent.
In those cases, the `EnvoyFilter` shown below can be used to insert a Lua filter to normalize the path to lower case.
This filter will change both the path used for comparison and the path presented to the application.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: ingress-case-insensitive
namespace: istio-system
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
patch:
operation: INSERT_FIRST
value:
name: envoy.lua
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
inlineCode: |
function envoy_on_request(request_handle)
local path = request_handle:headers():get(":path")
request_handle:headers():replace(":path", string.lower(path))
end
#### Writing Host Match Policies
Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway
for a host of `example.com` generates a config matching `example.com` and `example.com:*`. However, exact match authorization
policies only match the exact string given for the `hosts` or `notHosts` fields.
[Authorization policy rules](/docs/reference/config/security/authorization-policy/#Rule) matching hosts should be written using
prefix matches instead of exact matches. For example, for an `AuthorizationPolicy` matching the Envoy configuration generated
for a hostname of `example.com`, you would use `hosts: ["example.com", "example.com:*"]` as shown in the below `AuthorizationPolicy`.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: ingress-host
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
action: DENY
rules:
- to:
- operation:
hosts: ["example.com", "example.com:*"]
Additionally, the `hosts` and `notHosts` fields should generally only be used on gateways for external traffic entering the mesh
and not on sidecars for traffic within the mesh. This is because the sidecar on the server side (where the authorization policy is enforced)
does not use the `Host` header when redirecting the request to the application. This makes `hosts` and `notHosts` meaningless
on sidecars, because a client could reach the application using an explicit IP address and an arbitrary `Host` header instead of
the service name.
If you really need to enforce access control based on the `Host` header on sidecars for any reason, follow the [default-deny patterns](/docs/ops/best-practices/security/#use-default-deny-patterns),
which would reject the request if the client uses an arbitrary `Host` header.
#### Specialized Web Application Firewall (WAF)
Many specialized Web Application Firewall (WAF) products provide additional normalization options. They can be deployed in
front of the Istio ingress gateway to normalize requests entering the mesh. The authorization policy will then be enforced
on the normalized requests. Please refer to your specific WAF product for configuring the normalization options.
#### Feature request to Istio
If you believe Istio should officially support a specific normalization, you can follow the [reporting a vulnerability](/docs/releases/security-vulnerabilities/#reporting-a-vulnerability)
page to send a feature request about the specific normalization to the Istio Product Security Work Group for initial evaluation.
Please do not open any issues in public without first contacting the Istio Product Security Work Group because the
issue might be considered a security vulnerability that needs to be fixed in private.
If the Istio Product Security Work Group evaluates the feature request as not a security vulnerability, an issue will
be opened in public for further discussions of the feature request.
### Known limitations
This section lists known limitations of the authorization policy.
#### Server-first TCP protocols are not supported
Server-first TCP protocols mean the server application will send the first bytes right after accepting the TCP connection
before receiving any data from the client.
Currently, the authorization policy only supports enforcing access control on inbound traffic and not the outbound traffic.
It also does not support server-first TCP protocols because the first bytes are sent by the server application before
it receives any data from the client. In this case, the first bytes sent by the server are returned to the client
directly without going through the access control check of the authorization policy.
You should not use the authorization policy if the first bytes sent by the server-first TCP protocols include any sensitive
data that need to be protected by proper authorization.
You could still use the authorization policy in this case if the first bytes do not include any sensitive data, for example,
if the first bytes are used to negotiate the connection with data that is publicly accessible to any client. The authorization
policy will work as usual for subsequent requests sent by the client after the first bytes.
## Understand traffic capture limitations
The Istio sidecar works by capturing both inbound traffic and outbound traffic and directing them through the sidecar proxy.
However, not *all* traffic is captured:
* Redirection only handles TCP based traffic. Any UDP or ICMP packets will not be captured or modified.
* Inbound capture is disabled on many [ports used by the sidecar](/docs/ops/deployment/application-requirements/#ports-used-by-istio) as well as port 22. This list can be expanded by options like `traffic.sidecar.istio.io/excludeInboundPorts`.
* Outbound capture may similarly be reduced through settings like `traffic.sidecar.istio.io/excludeOutboundPorts` or other means (see the annotation sketch after this list).
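For illustration, these options are plain pod annotations; the port numbers below are placeholders:
```yaml
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "8081"    # placeholder port
    traffic.sidecar.istio.io/excludeOutboundPorts: "9090"   # placeholder port
```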
In general, there is minimal security boundary between an application and its sidecar proxy. Configuration of the sidecar is allowed on a per-pod basis, and both run in the same network/process namespace.
As such, the application may have the ability to remove redirection rules and remove, alter, terminate, or replace the sidecar proxy.
This allows a pod to intentionally bypass its sidecar for outbound traffic or intentionally allow inbound traffic to bypass its sidecar.
As a result, it is not secure to rely on all traffic being captured unconditionally by Istio.
Instead, the security boundary is that a client may not bypass *another* pod's sidecar.
For example, if I run the `reviews` application on port `9080`, I can assume that all traffic from the `productpage` application will be captured by the sidecar proxy,
where Istio authentication and authorization policies may apply.
### Defense in depth with `NetworkPolicy`
To further secure traffic, Istio policies can be layered with Kubernetes [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
This enables a strong [defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)) strategy that can be used to further strengthen the security of your mesh.
For example, you may choose to only allow traffic to port `9080` of our `reviews` application.
In the event of a compromised pod or security vulnerability in the cluster, this may limit or stop an attacker's progress.
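A minimal `NetworkPolicy` sketch for that example might look like the following; the namespace and the `app: reviews` label are assumptions:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: reviews-ingress
  namespace: default        # assumed namespace of the reviews workload
spec:
  podSelector:
    matchLabels:
      app: reviews          # assumed workload label
  ingress:
  - ports:
    - protocol: TCP
      port: 9080
```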
Depending on the actual implementation, changes to network policy may not affect existing connections in the Istio proxies.
You may need to restart the Istio proxies after applying the policy so that existing connections will be closed and
new connections will be subject to the new policy.
### Securing egress traffic
A common misconception is that options like [`outboundTrafficPolicy: REGISTRY_ONLY`](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services) act as a security policy preventing all access to undeclared services.
However, this is not a strong security boundary, as mentioned above, and should be considered best-effort.
While this is useful to prevent accidental dependencies, if you want to secure egress traffic and enforce that all outbound traffic goes through a proxy, you should instead rely on an [Egress Gateway](/docs/tasks/traffic-management/egress/egress-gateway/).
When combined with a [Network Policy](/docs/tasks/traffic-management/egress/egress-gateway/#apply-kubernetes-network-policies), you can enforce all traffic, or some subset, goes through the egress gateway.
This ensures that even if a client accidentally or maliciously bypasses their sidecar, the request will be blocked.
## Configure TLS verification in Destination Rule when using TLS origination
Istio offers the ability to [originate TLS](/docs/tasks/traffic-management/egress/egress-tls-origination/) from a sidecar proxy or gateway.
This enables applications that send plaintext HTTP traffic to be transparently "upgraded" to HTTPS.
Care must be taken when configuring the `DestinationRule`'s `tls` setting to specify the `caCertificates`, `subjectAltNames`, and `sni` fields.
The `caCertificates` field can be automatically set from the system certificate store's CA certificate by enabling the environment variable `VERIFY_CERTIFICATE_AT_CLIENT=true` on Istiod.
If the operating system CA certificate should only be used for select hosts, leave `VERIFY_CERTIFICATE_AT_CLIENT=false` on Istiod and set `caCertificates` to `system` in the desired `DestinationRule`(s).
Specifying `caCertificates` in a `DestinationRule` will take priority, and the OS CA certificate will not be used.
By default, egress traffic does not send SNI during the TLS handshake.
SNI must be set in the `DestinationRule` to ensure the host properly handles the request.
In order to verify the server's certificate it is important that both `caCertificates` and `subjectAltNames` be set.
Verification of the certificate presented by the server against a CA is not sufficient, as the Subject Alternative Names must also be validated.
If `VERIFY_CERTIFICATE_AT_CLIENT` is set but `subjectAltNames` is not, then you are not verifying all credentials.
If no CA certificate is being used, `subjectAltNames` will not be checked, regardless of whether it is set.
For example:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
name: google-tls
spec:
host: google.com
trafficPolicy:
tls:
mode: SIMPLE
caCertificates: /etc/ssl/certs/ca-certificates.crt
subjectAltNames:
- "google.com"
sni: "google.com"
## Gateways
When running an Istio [gateway](/docs/tasks/traffic-management/ingress/), there are a few resources involved:
* `Gateway`s, which control the ports and TLS settings for the gateway.
* `VirtualService`s, which control the routing logic. These are associated with `Gateway`s by direct reference in the `gateways` field and a mutual agreement on the `hosts` field in the `Gateway` and `VirtualService`.
### Restrict `Gateway` creation privileges
It is recommended to restrict creation of Gateway resources to trusted cluster administrators. This can be achieved by [Kubernetes RBAC policies](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) or tools like [Open Policy Agent](https://www.openpolicyagent.org/).
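As an illustration, a Kubernetes RBAC rule that grants `Gateway` write access only to a dedicated admin role might look like the sketch below; the role name is hypothetical:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-gateway-admin   # hypothetical role name
rules:
- apiGroups: ["networking.istio.io"]
  resources: ["gateways"]
  verbs: ["create", "update", "patch", "delete"]
```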
### Avoid overly broad `hosts` configurations
When possible, avoid overly broad `hosts` settings in `Gateway`.
For example, this configuration will allow any `VirtualService` to bind to the `Gateway`, potentially exposing unexpected domains:
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
This should be locked down to allow only specific domains or specific namespaces:
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "foo.example.com" # Allow only VirtualServices that are for foo.example.com
- "default/bar.example.com" # Allow only VirtualServices in the default namespace that are for bar.example.com
- "route-namespace/*" # Allow only VirtualServices in the route-namespace namespace for any host
### Isolate sensitive services
It may be desired to enforce stricter physical isolation for sensitive services. For example, you may want to run a
[dedicated gateway instance](/docs/setup/install/istioctl/#configure-gateways) for a sensitive `payments.example.com`, while utilizing a single
shared gateway instance for less sensitive domains like `blog.example.com` and `store.example.com`.
This can offer a stronger defense-in-depth and help meet certain regulatory compliance guidelines.
### Explicitly disable sensitive HTTP hosts under relaxed SNI host matching
It is reasonable to use multiple `Gateway`s to define mutual TLS and simple TLS on different hosts.
For example, use mutual TLS for SNI host `admin.example.com` and simple TLS for SNI host `*.example.com`.
kind: Gateway
metadata:
name: guestgateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*.example.com"
tls:
mode: SIMPLE
---
kind: Gateway
metadata:
name: admingateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- admin.example.com
tls:
mode: MUTUAL
If the above is necessary, it's highly recommended to explicitly disable the HTTP host `admin.example.com` in the `VirtualService` that attaches to `*.example.com`. The reason is that currently the underlying [Envoy proxy does not require](https://github.com/envoyproxy/envoy/issues/6767) the HTTP/1 header `Host` or the HTTP/2 pseudo header `:authority` to follow the SNI constraints, so an attacker can reuse the guest SNI TLS connection to access the admin `VirtualService`. The HTTP response code 421 is designed for this `Host`/SNI mismatch and can be used to implement the restriction.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: disable-sensitive
spec:
hosts:
- "admin.example.com"
gateways:
- guestgateway
http:
- match:
- uri:
prefix: /
fault:
abort:
percentage:
value: 100
httpStatus: 421
route:
- destination:
port:
number: 8000
host: dest.default.cluster.local
## Protocol detection
Istio will [automatically determine the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection) of traffic it sees.
To avoid accidental or intentional misdetection, which may result in unexpected traffic behavior, it is recommended to [explicitly declare the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection) where possible.
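For example, the protocol can be declared with a protocol-prefixed port name or the Kubernetes `appProtocol` field; the service name and port below are placeholders:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # placeholder service name
spec:
  selector:
    app: web
  ports:
  - name: http-web         # the "http-" prefix declares the protocol
    appProtocol: http      # Kubernetes-native alternative to the name prefix
    port: 80
```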
## CNI
In order to transparently capture all traffic, Istio relies on `iptables` rules configured by the `istio-init` `initContainer`.
This adds a [requirement](/docs/ops/deployment/application-requirements/) for the `NET_ADMIN` and `NET_RAW` [capabilities](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) to be available to the pod.
To reduce privileges granted to pods, Istio offers a [CNI plugin](/docs/setup/additional-setup/cni/) which removes this requirement.
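Assuming an `istioctl`-based installation, the CNI node agent can be enabled roughly as follows:
```console
$ istioctl install --set components.cni.enabled=true
```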
## Use hardened docker images
Istio's default Docker images, including those run by the control plane, gateway, and sidecar proxies, are based on `ubuntu`.
This provides various tools such as `bash` and `curl`, which trades convenience for an increased attack surface.
Istio also offers smaller images based on [distroless images](/docs/ops/configuration/security/harden-docker-images/) that reduce the dependencies in the image.
Distroless images are currently an alpha feature.
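For example, a sketch of selecting the distroless image variant at install time, assuming an `istioctl`-based installation:

```console
$ istioctl install --set values.global.variant=distroless
```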
## Release and security policy
In order to ensure your cluster has the latest security patches for known vulnerabilities, it is important to stay on the latest patch release of Istio and ensure that you are on a [supported release](/docs/releases/supported-releases) that is still receiving security patches.
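One way to check which versions your control plane and data plane proxies are currently running, so you can compare them against the supported releases list:

```console
$ istioctl version
```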
## Detect invalid configurations
While Istio provides validation of resources when they are created, these checks cannot catch every issue that prevents configuration from being distributed in the mesh.
This could result in applying a policy that is silently ignored, leading to unexpected results.
* Run `istioctl analyze` before or after applying configuration to ensure it is valid (see the example after this list).
* Monitor the control plane for rejected configurations. These are exposed by the `pilot_total_xds_rejects` metric, in addition to logs.
* Test your configuration to ensure it gives the expected results.
  For a security policy, it is useful to run positive and negative tests to ensure you do not accidentally restrict too much or too little traffic.
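A minimal sketch of such a check (the namespace scope shown is illustrative; a single namespace can be analyzed with `--namespace` instead):

```console
$ istioctl analyze --all-namespaces
```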
## Avoid alpha and experimental features
All Istio features and APIs are assigned a [feature status](/docs/releases/feature-stages/) that defines their stability, deprecation policy, and security policy.
Because alpha and experimental features do not carry the same security guarantees, it is recommended to avoid them whenever possible.
Security issues found in these features may not be fixed immediately, or may otherwise fall outside our standard [security vulnerability](/docs/releases/security-vulnerabilities/) process.
To determine the feature status of features in use in your cluster, consult the [Istio features](/docs/releases/feature-stages/#istio-features) list.
<!-- In the future, we should document the `istioctl` command to check this when available. -->
## Lock down ports
Istio configures a [variety of ports](/docs/ops/deployment/application-requirements/#ports-used-by-istio) that may be locked down to improve security.
### Control Plane
Istiod exposes a few unauthenticated plaintext ports for convenience by default. If desired, these can be closed:
* Port `8080` exposes the debug interface, which offers read access to a variety of details about the cluster's state.
  This can be disabled by setting the environment variable `ENABLE_DEBUG_ON_HTTP=false` on Istiod. Warning: many `istioctl` commands
depend on this interface and will not function if it is disabled.
* Port `15010` exposes the XDS service over plaintext. This can be disabled by adding the `--grpcAddr=""` flag to the Istiod Deployment.
Note: highly sensitive services, such as the certificate signing and distribution services, are never served over plaintext.
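As a sketch, assuming an `istioctl`/`IstioOperator`-based installation, the debug interface can be disabled with an overlay like the following (disabling the plaintext XDS port additionally requires adding `--grpcAddr=""` to the Istiod container arguments):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        # Disables the plaintext debug interface on port 8080.
        - name: ENABLE_DEBUG_ON_HTTP
          value: "false"
```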
### Data Plane
The proxy exposes a variety of ports. Exposed externally are port `15090` (telemetry) and port `15021` (health check).
Ports `15020` and `15000` provide debugging endpoints. These are exposed over `localhost` only.
As a result, the applications running in the same pod as the proxy have access; there is no trust boundary between the sidecar and application.
## Configure third party service account tokens
To authenticate with the Istio control plane, the Istio proxy will use a Service Account token. Kubernetes supports two forms of these tokens:
* Third party tokens, which have a scoped audience and expiration.
* First party tokens, which have no expiration and are mounted into all pods.
Because the properties of the first party token are less secure, Istio will default to using third party tokens. However, this feature is not enabled on all Kubernetes platforms.
If you are using `istioctl` to install, support will be automatically detected. This can also be configured manually by passing `--set values.global.jwtPolicy=third-party-jwt` or `--set values.global.jwtPolicy=first-party-jwt`.
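For example, a sketch of explicitly selecting third party tokens during an `istioctl`-based installation:

```console
$ istioctl install --set values.global.jwtPolicy=third-party-jwt
```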
To determine if your cluster supports third party tokens, look for the `TokenRequest` API. If this returns no response, then the feature is not supported:
```console
$ kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
{
    "name": "serviceaccounts/token",
    "singularName": "",
    "namespaced": true,
    "group": "authentication.k8s.io",
    "version": "v1",
    "kind": "TokenRequest",
    "verbs": [
        "create"
    ]
}
```
While most cloud providers now support this feature, many local development tools and custom installations did not prior to Kubernetes 1.20. To enable this feature, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection).
## Configure a limit on downstream connections
By default, Istio (and Envoy) have no limit on the number of downstream connections. This can be exploited by a malicious actor (see [security bulletin 2020-007](/news/security/istio-security-2020-007/)). To work around this, you should configure an appropriate connection limit for your environment.
### Configure `global_downstream_max_connections` value
The following configuration can be supplied during installation:
```yaml
meshConfig:
  defaultConfig:
    runtimeValues:
      "overload.global_downstream_max_connections": "100000"
```