---
layout: docs
page_title: Migration Guide - Active Directory - Secrets Engines
description: >-
The guide for migrating from the Active Directory secrets engine to the LDAP secrets engine.
---
# Migration guide - Active Directory secrets engine
The Vault [Active Directory secrets engine](/vault/docs/secrets/ad) has been deprecated as
of the Vault 1.13 release. This document provides guidance for migrating from the Active
Directory secrets engine to the [LDAP secrets engine](/vault/docs/secrets/ldap) that was
introduced in Vault 1.12.
## Deprecation timeline
Beginning with the Vault 1.13 release, we will continue to support the Active Directory (AD)
secrets engine in maintenance mode for six major Vault releases. Maintenance mode means that
we will fix bugs and security issues, but no new features will be added. All new feature
development efforts will go towards the unified LDAP secrets engine. At Vault 1.18, we will
mark the AD secrets engine as [pending removal](/vault/docs/deprecation/faq#pending-removal).
At this time, Vault will begin to strongly signal operators that they need to migrate off of
the AD secrets engine. At Vault 1.19, we will mark the AD secrets engine as
[removed](/vault/docs/deprecation/faq#removed). At this time, the AD secrets engine will be
removed from Vault. Vault will not start up with the AD secrets engine mounts enabled.
## Migration steps
The following sections detail how to migrate the AD secrets engine configuration and
applications consuming secrets to the new LDAP secrets engine.
### 1. Enable LDAP secrets engine
The LDAP secrets engine needs to be enabled in order to have a target for migration of
existing AD secrets engine mounts. AD secrets engine mounts should be mapped 1-to-1 with
new LDAP secrets engine mounts.
To enable the LDAP secrets engine:
```shell-session
$ vault secrets enable ldap
```
To enable at a custom path:
```shell-session
$ vault secrets enable -path=<custom_path> ldap
```
If enabled at a custom path, the `/ldap/` path segment in API paths must be replaced with
the custom path value.
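For example, with a hypothetical mount path of `ldap-ad`, subsequent API calls target that segment instead of the default:

```shell-session
$ vault secrets enable -path=ldap-ad ldap

# Configuration now lives at ldap-ad/config rather than ldap/config
$ vault read ldap-ad/config
```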
### 2. Migrate configuration
The AD secrets engine [configuration](/vault/api-docs/secret/ad#configuration)
will need to be migrated to an LDAP secrets engine [configuration](/vault/api-docs/secret/ldap#configuration-management).
The API paths and parameters will need to be considered during the migration.
#### API path
| AD Secrets Engine | LDAP Secrets Engine |
| ----------------- |-------------------- |
| [/ad/config](/vault/api-docs/secret/ad#configuration) | [/ldap/config](/vault/api-docs/secret/ldap#configuration-management) |
#### Parameters
The parameters from existing AD secrets engine configurations can generally be mapped 1-to-1
to LDAP secrets engine configuration. The following LDAP secrets engine parameters are the
exception and must be considered during the migration.
| AD Secrets Engine | LDAP Secrets Engine | Details |
| ----------------- | ------------------- | ------- |
| N/A | [schema](/vault/api-docs/secret/ldap#schema) | Must be set to the `ad` option on the LDAP secrets engine configuration. |
| [userdn](/vault/api-docs/secret/ad#userdn) | [userdn](/vault/api-docs/secret/ldap#userdn) | Required to be set if using the [library sets](#4-migrate-library-sets) check-out feature. It can be optionally set if using the [static roles](#3-migrate-roles) feature without providing a distinguished name ([dn](/vault/api-docs/secret/ldap#dn)). |
| [ttl](/vault/api-docs/secret/ad#ttl) | N/A | Replaced by static role [rotation_period](/vault/api-docs/secret/ldap#rotation_period). |
| [max_ttl](/vault/api-docs/secret/ad#max_ttl) | N/A | Not supported for [static roles](#3-migrate-roles). Can be set using [max_ttl](/vault/api-docs/secret/ldap#max_ttl-1) for library sets. |
| [last_rotation_tolerance](/vault/api-docs/secret/ad#last_rotation_tolerance) | N/A | Not supported by the LDAP secrets engine. Passwords will be rotated based on the static role [rotation_period](/vault/api-docs/secret/ldap#rotation_period). |
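As a sketch, assuming placeholder connection values, an existing AD configuration might map to an LDAP secrets engine configuration with the AD-compatible schema as follows:

```shell-session
$ vault write ldap/config \
    binddn='CN=vault-admin,CN=Users,DC=example,DC=com' \
    bindpass='<bind_password>' \
    url='ldaps://ad.example.com' \
    userdn='CN=Users,DC=example,DC=com' \
    schema=ad
```

Note that `schema=ad` must be set for the LDAP secrets engine to manage Active Directory accounts.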
### 3. Migrate roles
AD secrets engine [roles](/vault/api-docs/secret/ad#role-management) will need to be migrated
to LDAP secrets engine [static roles](/vault/api-docs/secret/ldap#static-roles). The API paths,
parameters, and rotation periods will need to be considered during the migration.
#### API path
| AD Secrets Engine | LDAP Secrets Engine |
| ----------------- | ------------------- |
| [/ad/roles/:role_name](/vault/api-docs/secret/ad#role-management) | [/ldap/static-role/:role_name](/vault/api-docs/secret/ldap#static-roles) |
#### Parameters
The following parameters must be migrated.
| AD Secrets Engine | LDAP Secrets Engine | Details |
| ----------------- | ------------------- | ------- |
| [ttl](/vault/api-docs/secret/ad#ttl-1) | [rotation_period](/vault/api-docs/secret/ldap#rotation_period) | N/A |
| [service_account_name](/vault/api-docs/secret/ad#service_account_name) | [username](/vault/api-docs/secret/ldap#username) | If `username` is set without setting the [dn](/vault/api-docs/secret/ldap#dn) value, then the configuration [userdn](/vault/api-docs/secret/ldap#userdn) must also be set. |
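For example, an AD role with a `service_account_name` and `ttl` might migrate to a static role like the following (the name and rotation period are placeholders):

```shell-session
$ vault write ldap/static-role/my-role \
    username='my-service-account' \
    rotation_period='24h'
```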
#### Rotation periods
Rotations that occur from AD secrets engine [roles](/vault/api-docs/secret/ad#role-management)
may conflict with rotations performed by LDAP secrets engine [static roles](/vault/api-docs/secret/ldap#static-roles)
during the migration process. This could cause applications consuming passwords to read a
password that is invalidated by a rotation shortly afterward. To mitigate this, set an
initial [rotation_period](/vault/api-docs/secret/ldap#rotation_period) that provides a
large enough window to complete [application migrations](#5-migrate-applications), minimizing
the chance of this happening. Additionally, tuning the AD secrets engine [last_rotation_tolerance](/vault/api-docs/secret/ad#last_rotation_tolerance)
parameter could help mitigate applications reading stale passwords, since the parameter allows
rotation of the password if it's been rotated out-of-band within a given duration.
<Note title="Lazy rotation vs automatic rotation">
The AD secrets engine uses **lazy rotation** for passwords. With lazy
rotation, passwords rotate whenever the engine receives a request for a role
whose rotation period has elapsed.
The LDAP secrets engine uses **automatic rotation** for passwords. With
automatic rotation, passwords are rotated as soon as the rotation period
elapses, without waiting for a client request.
When migrating to the LDAP secrets engine, you may need to account for the
rotation changes in your clients. For example, a client that assumes the
password does not change until its next request to Vault, and uses that
password to authenticate against other services, may break once rotation
happens automatically.
</Note>
### 4. Migrate library sets
AD secrets engine [library sets](/vault/api-docs/secret/ad#library-management) will need to
be migrated to LDAP secrets engine [library sets](/vault/api-docs/secret/ldap#library-set-management).
The API paths and parameters will need to be considered during the migration.
#### API path
| AD Secrets Engine | LDAP Secrets Engine |
| ----------------- | ------------------- |
| [/ad/library/:set_name](/vault/api-docs/secret/ad#library-management) | [/ldap/library/:set_name](/vault/api-docs/secret/ldap#library-set-management) |
#### Parameters
The parameters from existing AD secrets engine library sets can be exactly mapped 1-to-1
to LDAP secrets engine library sets. There are no exceptions to consider.
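For example, an AD library set could be recreated on the LDAP mount with the same parameters (names and TTLs below are placeholders):

```shell-session
$ vault write ldap/library/my-set \
    service_account_names='svc-account-1,svc-account-2' \
    ttl='10h' \
    max_ttl='20h'
```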
### 5. Migrate applications
The AD secrets engine provides APIs to obtain credentials for AD users and service accounts.
Applications, or Vault clients, are typically the consumer of these credentials. For applications
to successfully migrate, they must begin using new API paths and response formats provided
by the LDAP secrets engine. Additionally, they must obtain a Vault [token](/vault/docs/concepts/tokens)
with an ACL [policy](/vault/docs/concepts/policies) that authorizes access to the new APIs.
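As a minimal sketch, assuming a static role named `my-role`, a policy granting an application read access to its password could look like:

```shell-session
$ vault policy write ldap-app - <<EOF
path "ldap/static-cred/my-role" {
  capabilities = ["read"]
}
EOF
```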
The following section details credential-providing APIs and how their response formats differ
between the AD secrets engine and LDAP secrets engine.
#### API paths
| AD Secrets Engine | LDAP Secrets Engine | Details |
| ----------------- | ------------------- | ------- |
| [/ad/creds/:role_name](/vault/api-docs/secret/ad#retrieving-passwords) | [/ldap/static-cred/:role_name](/vault/api-docs/secret/ldap#static-role-passwords) | Response formats differ. Namely, `current_password` is now `password`. See [AD response](/vault/api-docs/secret/ad#sample-get-response) and [LDAP response](/vault/api-docs/secret/ldap#sample-get-response-1) for the difference. |
| [/ad/library/:set_name/check-out](/vault/api-docs/secret/ad#check-a-credential-out) | [/ldap/library/:set_name/check-out](/vault/api-docs/secret/ldap#check-out-management) | Response formats do not differ. |
| [/ad/library/:set_name/check-in](/vault/api-docs/secret/ad#check-a-credential-in) | [/ldap/library/:set_name/check-in](/vault/api-docs/secret/ldap#check-in-management) | Response formats do not differ. |
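For example, an application that previously read `/ad/creds/my-role` would switch to the following (role name assumed):

```shell-session
# Response contains `password` instead of the AD engine's `current_password`
$ vault read ldap/static-cred/my-role
```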
### 6. Disable AD secrets engines
AD secrets engine mounts can be disabled after successful migration of configuration and
applications to the LDAP secrets engine. Note that disabling the secrets engine will erase
its configuration from storage. This cannot be reversed.
To disable the AD secrets engine:
```shell-session
$ vault secrets disable ad
```
To disable at a custom path:
```shell-session
$ vault secrets disable <custom_path>
```
---
layout: docs
page_title: OIDC Identity Provider
description: >-
Setup and configuration for Vault as an OpenID Connect (OIDC) identity provider.
---
# OIDC identity provider
Vault is an OpenID Connect ([OIDC](https://openid.net/specs/openid-connect-core-1_0.html))
identity provider. This enables client applications that speak the OIDC protocol to leverage
Vault's source of [identity](/vault/docs/concepts/identity) and wide range of [authentication methods](/vault/docs/auth)
when authenticating end-users. Client applications can configure their authentication logic
to talk to Vault. Once enabled, Vault will act as the bridge to other identity providers via
its existing authentication methods. Client applications can also obtain identity information
for their end-users by leveraging custom templating of Vault identity information.
-> **Note**: For more detailed information on the configuration resources and OIDC endpoints,
please visit the [OIDC provider](/vault/docs/concepts/oidc-provider) concepts page.
## Setup
The Vault OIDC provider system is built on top of the identity secrets engine.
This secrets engine is mounted by default and cannot be disabled or moved.
Each Vault namespace has a default OIDC [provider](/vault/docs/concepts/oidc-provider#oidc-providers)
and [key](/vault/docs/concepts/oidc-provider#keys). This built-in configuration enables client
applications to begin using Vault as a source of identity with minimal configuration. For
details on the built-in configuration and advanced options, see the [OIDC provider](/vault/docs/concepts/oidc-provider)
concepts page.
The following steps show a minimal configuration that allows a client application to use
Vault as an OIDC provider.
1. Enable a Vault auth method:
```text
$ vault auth enable userpass
Success! Enabled userpass auth method at: userpass/
```
Any Vault auth method may be used within the OIDC flow. For simplicity, enable the
`userpass` auth method.
2. Create a user:
```text
$ vault write auth/userpass/users/end-user password="securepassword"
Success! Data written to: auth/userpass/users/end-user
```
This user will authenticate to Vault through a client application, otherwise known as
an OIDC [relying party](https://openid.net/specs/openid-connect-core-1_0.html#Terminology).
3. Create a client application:
```text
$ vault write identity/oidc/client/my-webapp \
redirect_uris="https://localhost:9702/auth/oidc-callback" \
assignments="allow_all"
Success! Data written to: identity/oidc/client/my-webapp
```
This operation creates a client application which can be used to configure an OIDC
relying party. See the [client applications](/vault/docs/concepts/oidc-provider#client-applications)
section for details on different client types, including `confidential` and `public` clients.
The `assignments` parameter limits the Vault entities and groups that are allowed to
authenticate through the client application. By default, no Vault entities are allowed.
To allow all Vault entities to authenticate, the built-in [allow_all](/vault/docs/concepts/oidc-provider#assignments)
assignment is provided.
4. Read client credentials:
```text
$ vault read identity/oidc/client/my-webapp
Key Value
--- -----
access_token_ttl 24h
assignments [allow_all]
client_id GSDTnn3KaOrLpNlVGlYLS9TVsZgOTweO
client_secret hvo_secret_gBKHcTP58C4aq7FqPWsuqKgpiiegd7ahpifGae9WGkHRCwFEJTZA9KGdNVpzE0r8
client_type confidential
id_token_ttl 24h
key default
redirect_uris [https://localhost:9702/auth/oidc-callback]
```
The `client_id` and `client_secret` are the client application's credentials. These
values are typically required when configuring an OIDC relying party.
5. Read OIDC discovery configuration:
```text
$ curl -s http://127.0.0.1:8200/v1/identity/oidc/provider/default/.well-known/openid-configuration
{
  "issuer": "http://127.0.0.1:8200/v1/identity/oidc/provider/default",
  "jwks_uri": "http://127.0.0.1:8200/v1/identity/oidc/provider/default/.well-known/keys",
  "authorization_endpoint": "http://127.0.0.1:8200/ui/vault/identity/oidc/provider/default/authorize",
  "token_endpoint": "http://127.0.0.1:8200/v1/identity/oidc/provider/default/token",
  "userinfo_endpoint": "http://127.0.0.1:8200/v1/identity/oidc/provider/default/userinfo",
  "request_parameter_supported": false,
  "request_uri_parameter_supported": false,
  "id_token_signing_alg_values_supported": [
    "RS256",
    "RS384",
    "RS512",
    "ES256",
    "ES384",
    "ES512",
    "EdDSA"
  ],
  "response_types_supported": [
    "code"
  ],
  "scopes_supported": [
    "openid"
  ],
  "subject_types_supported": [
    "public"
  ],
  "grant_types_supported": [
    "authorization_code"
  ],
  "token_endpoint_auth_methods_supported": [
    "none",
    "client_secret_basic",
    "client_secret_post"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
```
Each Vault OIDC provider publishes [discovery metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata).
The `issuer` value is typically required when configuring an OIDC relying party.
## Usage
After configuring a Vault auth method and client application, the following details can
be used to configure an OIDC relying party to delegate end-user authentication to Vault.
- `client_id` - The ID of the client application
- `client_secret` - The secret of the client application
- `issuer` - The issuer of the Vault OIDC provider
A number of HashiCorp products provide OIDC authentication methods. This means that they
can leverage Vault as a source of identity using the OIDC protocol. See the following links
for details on configuring OIDC authentication for other HashiCorp products:
- [Boundary](/boundary/tutorials/access-management/oidc-auth)
- [Consul](/consul/docs/security/acl/auth-methods/oidc)
- [Waypoint](/waypoint/docs/server/auth/oidc)
- [Nomad](/nomad/tutorials/single-sign-on/sso-oidc-vault)
Otherwise, refer to the documentation of the specific OIDC relying party for usage details.
## Supported flows
The Vault OIDC provider feature currently supports the following authentication flow:
- [Authorization Code Flow](https://openid.net/specs/openid-connect-core-1_0.html#CodeFlowAuth).
## Tutorial
Refer to the [Vault as an OIDC Identity Provider](/vault/tutorials/auth-methods/oidc-identity-provider)
tutorial to learn how to configure a HashiCorp [Boundary](https://www.boundaryproject.io/)
to leverage Vault as a source of identity using the OIDC protocol.
## API
The Vault OIDC provider feature has a full HTTP API. Please see the
[OIDC identity provider API](/vault/api-docs/secret/identity/oidc-provider) for more
details.
---
layout: docs
page_title: Identity Tokens
description: Details and best practices for identity tokens.
---
# Identity tokens
## Introduction
Identity information is used throughout Vault, but it can also be exported for
use by other applications. An authorized user/application can request a token
that encapsulates identity information for their associated entity. These
tokens are signed JWTs following the [OIDC ID
token](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) structure.
The public keys used to authenticate the tokens are published by Vault on an
unauthenticated endpoint following OIDC discovery and JWKS conventions, which
should be directly usable by JWT/OIDC libraries. An introspection endpoint is
also provided by Vault for token verification.
### Roles and keys
OIDC-compliant ID tokens are generated against a role, which allows configuration
of token claims via a templating system, a token TTL, and a way to specify which
"key" will be used to sign the token. The role template is an optional parameter
to customize the token contents and is described in the next section. Token TTL
controls the expiration time of the token, after which verification libraries will
consider the token invalid. All roles have an associated `client_id` that will be
added to the token's `aud` parameter. JWT/OIDC libraries will usually require this
value. The parameter may be set by the operator to a chosen value, or a
Vault-generated value will be used if left unconfigured.
A role's `key` parameter links a role to an existing named key (multiple roles
may refer to the same key). It is not possible to generate an unsigned ID token.
A named key is a public/private key pair generated by Vault. The private key is
used to sign the identity tokens, and the public key is used by clients to
verify the signature. Keys are regularly rotated, whereby a new key pair is
generated and the previous _public_ key is retained for a limited time for
verification purposes.
A named key's configuration specifies a rotation period, a verification TTL, a
signing algorithm, and a list of allowed client IDs. The rotation period specifies
the frequency at which a new signing key is generated and the private portion of the
previous signing key is deleted. The verification TTL is the time a public key is
retained for verification after being rotated. By default, keys are rotated
every 24 hours, and continue to be available for verification for 24 hours after
their rotation.
A key's list of allowed client IDs limits which roles may reference the key. The
parameter may be set to `*` to allow all roles. The validity evaluation is made
when a token is requested, not during configuration.
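A minimal sketch of this relationship, using hypothetical key and role names:

```text
# Create a named key; rotate daily and keep rotated public keys for 24h
$ vault write identity/oidc/key/my-key \
    rotation_period=24h \
    verification_ttl=24h \
    allowed_client_ids='*'

# Create a role that signs its tokens with that key
$ vault write identity/oidc/role/my-role \
    key=my-key \
    ttl=5m
```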
### Token contents and templates
Identity tokens will always contain, at a minimum, the claims required by OIDC:
- `iss` - Issuer URL
- `sub` - Requester's entity ID
- `aud` - `client_id` for the role
- `iat` - Time of issue
- `exp` - Expiration time for the token
In addition, the operator may configure per-role templates that allow a variety
of other entity information to be added to the token. The templates are
structured as JSON with replaceable parameters. The parameter syntax is the same
as that used for [ACL Path Templating](/vault/docs/concepts/policies).
For example:
```jsx
{
    "color": {{identity.entity.metadata.color}},
    "userinfo": {
        "username": {{identity.entity.aliases.<mount accessor>.name}},
        "groups": {{identity.entity.groups.names}}
    },
    "nbf": {{time.now}}
}
```
When a token is requested, the resulting template might be populated as:
```json
{
    "color": "green",
    "userinfo": {
        "username": "bob",
        "groups": ["web", "engr", "default"]
    },
    "nbf": 1561411915
}
```
which would be merged with the base OIDC claims into the final token:
```json
{
    "iss": "https://10.1.1.45:8200/v1/identity/oidc",
    "sub": "a2cd63d3-5364-406f-980e-8d71bb0692f5",
    "aud": "SxSouteCYPBoaTFy94hFghmekos",
    "iat": 1561411915,
    "exp": 1561412215,
    "color": "green",
    "userinfo": {
        "username": "bob",
        "groups": ["web", "engr", "default"]
    },
    "nbf": 1561411915
}
```
Note how the template is merged, with top level template keys becoming top level
token keys. For this reason, templates may not contain top level keys that
overwrite the standard OIDC claims.
Template parameters that are not present for an entity, such as a metadata key that
isn't set, or an alias accessor that doesn't exist, resolve to empty
strings or objects, depending on the data type.
Templates are configured on the role and may be optionally encoded as base64.
The full list of template parameters is shown below:
| Name | Description |
| :------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------- |
| `identity.entity.id` | The entity's ID |
| `identity.entity.name` | The entity's name |
| `identity.entity.groups.ids` | The IDs of the groups the entity is a member of |
| `identity.entity.groups.names` | The names of the groups the entity is a member of |
| `identity.entity.metadata` | Metadata associated with the entity |
| `identity.entity.metadata.<metadata key>` | Metadata associated with the entity for the given key |
| `identity.entity.aliases.<mount accessor>.id` | Entity alias ID for the given mount |
| `identity.entity.aliases.<mount accessor>.name` | Entity alias name for the given mount |
| `identity.entity.aliases.<mount accessor>.metadata` | Metadata associated with the alias for the given mount |
| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>` | Metadata associated with the alias for the given mount and metadata key |
| `identity.entity.aliases.<mount accessor>.custom_metadata` | Custom metadata associated with the alias for the given mount |
| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key |
| `time.now` | Current time as integral seconds since the Epoch |
| `time.now.plus.<duration>` | Current time plus a [duration format string](/vault/docs/concepts/duration-format) |
| `time.now.minus.<duration>` | Current time minus a [duration format string](/vault/docs/concepts/duration-format) |
### Token generation
An authenticated client may request a token using the [token generation
endpoint](/vault/api-docs/secret/identity/tokens#generate-a-signed-id-token). The token
will be generated per the requested role's specifications, for the requester's
entity. It is not possible to generate tokens for an arbitrary entity.
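For example, assuming the hypothetical role from the sketch above, an authenticated client requests a token with:

```text
$ vault read identity/oidc/token/my-role
```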
### Verifying authenticity of ID tokens generated by Vault
An identity token may be verified by the consuming party using the public keys
published by Vault, or via a Vault-provided introspection endpoint.
Vault will serve standard "[.well-known](https://tools.ietf.org/html/rfc5785)"
endpoints that allow easy integration with OIDC verification libraries.
Configuring the libraries will typically involve providing an issuer URL and
client ID. The library will then handle key requests and can validate the
signature and claims requirements on tokens. This approach has the advantage of
only requiring _access_ to Vault, not _authorization_, as the .well-known
endpoints are unauthenticated.
Alternatively, the token may be sent to Vault for verification via an
[introspection endpoint](/vault/api-docs/secret/identity/tokens#introspect-a-signed-id-token).
The response will indicate whether the token is "active" or not, as well as any
errors that occurred during validation. Beyond simply allowing the client to
delegate verification to Vault, using this endpoint incorporates the additional
check of whether the entity is still active or not, which is something that
cannot be determined from the token alone. Unlike the .well-known endpoint, accessing the
introspection endpoint does require a valid Vault token and sufficient
authorization.
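A sketch of both approaches, assuming the token is stored in `$ID_TOKEN`:

```text
# Unauthenticated discovery and public key endpoints
$ curl $VAULT_ADDR/v1/identity/oidc/.well-known/openid-configuration
$ curl $VAULT_ADDR/v1/identity/oidc/.well-known/keys

# Authenticated introspection; requires a valid Vault token
$ vault write identity/oidc/introspect token="$ID_TOKEN"
```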
### Issuer considerations
The identity token system has one configurable parameter: issuer. The issuer
`iss` claim is particularly important for proper validation of the token by
clients, and special consideration should be given when using Identity Tokens
with [performance replication](/vault/docs/enterprise/replication).
Consumers of the token will request public keys from Vault using the issuer URL,
so it must be network reachable. Furthermore, the returned set of keys will include
an issuer that must match the request.
By default Vault will set the issuer to the Vault instance's
[`api_addr`](/vault/docs/configuration#api_addr). This means that tokens
issued in a given cluster should be validated within that same cluster.
Alternatively, the [`issuer`](/vault/api-docs/secret/identity/tokens#issuer) parameter
may be configured explicitly. This address must point to the identity/oidc path
for the Vault instance (e.g.
`https://vault-1.example.com:8200/v1/identity/oidc`) and should be
reachable by any client trying to validate identity tokens.
## API
The Identity secrets engine has a full HTTP API. Please see the
[Identity secrets engine API](/vault/api-docs/secret/identity) for more
details.
---
layout: docs
page_title: Transit - Secrets Engines
description: >-
The transit secrets engine for Vault encrypts/decrypts data in-transit. It
doesn't store any secrets.
---
# Transit secrets engine
The transit secrets engine handles cryptographic functions on data in-transit.
Vault doesn't store the data sent to the secrets engine. It can also be viewed
as "cryptography as a service" or "encryption as a service". The transit secrets
engine can also sign and verify data; generate hashes and HMACs of data; and act
as a source of random bytes.
The primary use case for `transit` is to encrypt data from applications while
still storing that encrypted data in some primary data store. This relieves the
burden of proper encryption/decryption from application developers and pushes
the burden onto the operators of Vault.
Key derivation is supported, which allows the same key to be used for multiple
purposes by deriving a new key based on a user-supplied context value. In this
mode, convergent encryption can optionally be supported, which allows the same
input values to produce the same ciphertext.
Datakey generation allows processes to request a high-entropy key of a given
bit length be returned to them, encrypted with the named key. Normally this will
also return the key in plaintext to allow for immediate use, but this can be
disabled to accommodate auditing requirements.
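For instance, assuming a named key `my-key` already exists, a data key can be requested in either mode:

```text
# Returns a new high-entropy key in both plaintext and ciphertext form
$ vault write -f transit/datakey/plaintext/my-key

# Returns only the ciphertext, for stricter auditing requirements
$ vault write -f transit/datakey/wrapped/my-key
```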
## Working set management
The Transit engine supports versioning of keys. Key versions that are earlier
than a key's specified `min_decryption_version` get archived, and the rest of
the key versions belong to the working set. This is a performance consideration
to keep key loading fast, as well as a security consideration: by disallowing
decryption of old versions of keys, found ciphertext corresponding to obsolete
(but sensitive) data can not be decrypted by most users, but in an emergency
the `min_decryption_version` can be moved back to allow for legitimate
decryption.
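As a sketch, assuming a key named `my-key`, the working set boundary is adjusted through the key's configuration endpoint:

```text
# Key versions below 2 are archived and can no longer decrypt data
$ vault write transit/keys/my-key/config min_decryption_version=2
```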
Currently this archive is stored in a single storage entry. With some storage
backends, notably those using Raft or Paxos for HA capabilities, frequent
rotation may lead to a storage entry size for the archive that is larger than
the storage backend can handle. For frequent rotation needs, using named keys
that correspond to time bounds (e.g. five-minute periods floored to the closest
multiple of five) may provide a good alternative, allowing for several keys to
be live at once and a deterministic way to decide which key to use at any given
time.
## NIST rotation guidance
Periodic rotation of the encryption keys is recommended, even in the absence of
compromise. For AES-GCM keys, rotation should occur before approximately 2<sup>32</sup>
encryptions have been performed by a key version, following the guidelines of NIST
publication 800-38D. It is recommended that operators estimate the
encryption rate of a key and use that to determine a frequency of rotation
that prevents the guidance limits from being reached. For example, if one determines
that the estimated rate is 40 million operations per day, then rotating a key every
three months is sufficient: at that rate, reaching 2<sup>32</sup> (roughly 4.3 billion)
operations would take about 107 days, so a three-month rotation stays below the limit.
## Key types
As of now, the transit secrets engine supports the following key types (all key
types also generate separate HMAC keys):
- `aes128-gcm96`: AES-GCM with a 128-bit AES key and a 96-bit nonce; supports
encryption, decryption, key derivation, and convergent encryption
- `aes256-gcm96`: AES-GCM with a 256-bit AES key and a 96-bit nonce; supports
encryption, decryption, key derivation, and convergent encryption (default)
- `chacha20-poly1305`: ChaCha20-Poly1305 with a 256-bit key; supports
encryption, decryption, key derivation, and convergent encryption
- `ed25519`: Ed25519; supports signing, signature verification, and key
derivation
- `ecdsa-p256`: ECDSA using curve P-256; supports signing and signature
verification
- `ecdsa-p384`: ECDSA using curve P-384; supports signing and signature
verification
- `ecdsa-p521`: ECDSA using curve P-521; supports signing and signature
verification
- `rsa-2048`: 2048-bit RSA key; supports encryption, decryption, signing, and
signature verification
- `rsa-3072`: 3072-bit RSA key; supports encryption, decryption, signing, and
signature verification
- `rsa-4096`: 4096-bit RSA key; supports encryption, decryption, signing, and
signature verification
- `hmac`: HMAC; supports HMAC generation and verification.
- `managed_key`: Managed key; supports a variety of operations depending on the
backing key management solution. See [Managed Keys](/vault/docs/enterprise/managed-keys)
for more information. <EnterpriseAlert inline="true" />
- `aes128-cmac`: CMAC with a 128-bit AES key; supports CMAC generation and verification. <EnterpriseAlert inline="true" />
- `aes256-cmac`: CMAC with a 256-bit AES key; supports CMAC generation and verification. <EnterpriseAlert inline="true" />
~> **Note**: In FIPS 140-2 mode, the following algorithms are not certified
and thus should not be used: `chacha20-poly1305` and `ed25519`.
~> **Note**: All key types support HMAC operations through the use of a second,
randomly generated key created at key creation or rotation time. The HMAC key
type only supports HMAC, and behaves identically to other algorithms with
respect to the HMAC operations but supports key import. By default,
the HMAC key type uses a 256-bit key.
RSA operations use one of the following methods:
- OAEP (encrypt, decrypt), with SHA-256 hash function and MGF,
- PSS (sign, verify), with configurable hash function also used for MGF, and
- PKCS#1v1.5 (sign, verify), with configurable hash function.
## Convergent encryption
Convergent encryption is a mode where the same set of plaintext+context always
result in the same ciphertext. It does this by deriving a key using a key
derivation function but also by deterministically deriving a nonce. Because
these properties differ for any combination of plaintext and ciphertext over a
keyspace the size of 2^256, the risk of nonce reuse is near zero.
This has many practical uses. One common usage mode is to allow values to be stored
encrypted in a database, but with limited lookup/query support, so that rows
with the same value for a specific field can be returned from a query.
To accommodate for any needed upgrades to the algorithm, different versions of
convergent encryption have historically been supported:
- Version 1 required the client to provide their own nonce, which is highly
flexible but if done incorrectly can be dangerous. This was only in Vault
0.6.1, and keys using this version cannot be upgraded.
- Version 2 used an algorithmic approach to deriving the parameters. However,
the algorithm used was susceptible to offline plaintext-confirmation attacks,
which could allow attackers to brute force decryption if the plaintext size
was small. Keys using version 2 can be upgraded by simply performing a rotate
operation to a new key version; existing values can then be rewrapped against
the new key version and will use the version 3 algorithm.
- Version 3 uses a different algorithm designed to be resistant to offline
plaintext-confirmation attacks. It is similar to AES-SIV in that it uses a
PRF to generate the nonce from the plaintext.
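A minimal sketch of convergent encryption with placeholder names; both `derived` and `convergent_encryption` must be set at key creation, and every operation must supply the same base64-encoded `context`:

```text
$ vault write transit/keys/my-conv-key \
    convergent_encryption=true \
    derived=true

# Identical plaintext + context always yields identical ciphertext
$ vault write transit/encrypt/my-conv-key \
    plaintext=$(echo "my secret data" | base64) \
    context=$(echo "user-12345" | base64)
```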
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Transit secrets engine:
```text
$ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Create a named encryption key:
```text
$ vault write -f transit/keys/my-key
Success! Data written to: transit/keys/my-key
```
Usually each application has its own encryption key.
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can use this secrets engine.
1. Encrypt some plaintext data using the `/encrypt` endpoint with a named key:
**NOTE:** All plaintext data **must be base64-encoded**. The reason for this
requirement is that Vault does not require that the plaintext is "text". It
could be a binary file such as a PDF or image. The easiest safe transport
mechanism for this data as part of a JSON payload is to base64-encode it.
```text
$ vault write transit/encrypt/my-key plaintext=$(echo "my secret data" | base64)
Key Value
--- -----
ciphertext vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==
```
The returned ciphertext starts with `vault:v1:`. The first prefix (`vault`)
identifies that it has been wrapped by Vault. The `v1` indicates the key
version 1 was used to encrypt the plaintext; therefore, when you rotate
keys, Vault knows which version to use for decryption. The rest is a base64
concatenation of the initialization vector (IV) and ciphertext.
Note that Vault does not _store_ any of this data. The caller is responsible
for storing the encrypted ciphertext. When the caller wants the plaintext,
it must provide the ciphertext back to Vault to decrypt the value.
!> The Vault HTTP API imposes a maximum request size of 32MB to prevent
denial-of-service attacks. This limit can be tuned per [`listener`
block](/vault/docs/configuration/listener/tcp) in the Vault server
configuration.
1. Decrypt a piece of data using the `/decrypt` endpoint with a named key:
```text
$ vault write transit/decrypt/my-key ciphertext=vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==
Key Value
--- -----
plaintext bXkgc2VjcmV0IGRhdGEK
```
The resulting data is base64-encoded (see the note above for details on
why). Decode it to get the raw plaintext:
```text
$ base64 --decode <<< "bXkgc2VjcmV0IGRhdGEK"
my secret data
```
It is also possible to script this decryption using some clever shell
scripting in one command:
```text
$ vault write -field=plaintext transit/decrypt/my-key ciphertext=... | base64 --decode
my secret data
```
Using ACLs, it is possible to restrict using the transit secrets engine such
that trusted operators can manage the named keys, and applications can only
encrypt or decrypt using the named keys they need access to.
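For example, a sketch of such a policy for an application that only needs `my-key` (the policy name is illustrative):

```text
$ vault policy write my-app-transit - <<EOF
path "transit/encrypt/my-key" {
  capabilities = ["update"]
}
path "transit/decrypt/my-key" {
  capabilities = ["update"]
}
EOF
```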
1. Rotate the underlying encryption key. This will generate a new encryption key
and add it to the keyring for the named key:
```text
$ vault write -f transit/keys/my-key/rotate
Success! Data written to: transit/keys/my-key/rotate
```
Future encryptions will use this new key. Old data can still be decrypted
due to the use of a key ring.
1. Upgrade already-encrypted data to a new key. Vault will decrypt the value
using the appropriate key in the keyring and then encrypt the resulting
plaintext with the newest key in the keyring.
```text
$ vault write transit/rewrap/my-key ciphertext=vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==
Key Value
--- -----
ciphertext vault:v2:0VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ==
```
This process **does not** reveal the plaintext data. As such, a Vault policy
could grant even an untrusted process the ability to "rewrap" encrypted
data, since the process would never have access to the plaintext
data.
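For example, a sketch of a policy that grants only the rewrap operation (the policy name is illustrative):

```text
$ vault policy write rewrap-only - <<EOF
path "transit/rewrap/my-key" {
  capabilities = ["update"]
}
EOF
```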
## Bring your own key (BYOK)
~> **Note:** Key import functionality supports cases in which there is a need to bring
in an existing key from an HSM or other outside system. It is more secure to
have Transit generate and manage a key within Vault.
### Via the command line

The Vault command line tool [includes a helper](/vault/docs/commands/transit/) to perform the steps described
in the [Manual process](#manual-process) section below.
### Via the API
First, the wrapping key needs to be read from transit:
```text
$ vault read transit/wrapping_key
```
The wrapping key will be a 4096-bit RSA public key.
Then the wrapping key is used to create the ciphertext input for the `import` endpoint,
as described below. In what follows, the target key refers to the key being imported.
### HSM
If the key is being imported from an HSM that supports PKCS#11, there are
two possible scenarios:
- If the HSM supports the CKM_RSA_AES_KEY_WRAP mechanism, that can be used to wrap the
target key using the wrapping key.
- Otherwise, two mechanisms can be combined to wrap the target key. First, a 256-bit AES key should
be generated and then used to wrap the target key using the CKM_AES_KEY_WRAP_KWP mechanism.
Then the AES key should be wrapped under the wrapping key using the CKM_RSA_PKCS_OAEP mechanism
using MGF1 and either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.
The ciphertext is constructed by appending the wrapped target key to the wrapped AES key.
The ciphertext bytes should be base64-encoded.
### Manual process
If the target key is not stored in an HSM or KMS, the following steps can be used to construct
the ciphertext for the input of the `import` endpoint:
- Generate an ephemeral 256-bit AES key.
- Wrap the target key using the ephemeral AES key with AES-KWP.
~> Note: When wrapping a symmetric key (such as an AES or ChaCha20 key), wrap
the raw bytes of the key. For instance, with a 128-bit AES key, this will be
a 16-byte array that is wrapped directly, without base64 or other
encodings.<br /><br />When wrapping an asymmetric key
(such as an RSA or ECDSA key), wrap the **PKCS8**-encoded form of the
key, in raw DER/binary form. Do not apply PEM encoding to this blob prior
to encryption and do not base64 encode it.
- Wrap the AES key under the Vault wrapping key using RSAES-OAEP with MGF1 and
either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.
- Delete the ephemeral AES key.
- Append the wrapped target key to the wrapped AES key.
- Base64 encode the result.
For more details about wrapping the key for import into transit, see the
[key wrapping guide](/vault/docs/secrets/transit/key-wrapping-guide).
## Tutorial
Refer to the [Encryption as a Service: Transit Secrets
Engine](/vault/tutorials/encryption-as-a-service/eaas-transit)
tutorial to learn how to use the transit secrets engine to handle cryptographic functions on data in-transit.
## API
The Transit secrets engine has a full HTTP API. Please see the
[Transit secrets engine API](/vault/api-docs/secret/transit) for more
details.
---
layout: docs
page_title: Key Wrapping for Transit Key Import - Transit - Secrets Engines
description: |-
Details about wrapping keys for import into the transit secrets engine.
---
# Key wrapping for transit key import
The "bring your own key" (BYOK) functionality for the transit
secrets engine allows users to import keys that were generated
outside of Vault into the transit secrets engine.
This document describes the process for wrapping an externally-generated
key (the target key) for import into Vault. It describes the processes
for importing a software-stored key using Golang and for importing a key
that is stored in an HSM.
### Mount the secrets engine
```shell-session
$ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
```
### Retrieve the transit wrapping key
```shell-session
$ vault read transit/wrapping_key
```
This returns a 4096-bit RSA key.
The steps after this depend on whether the key is stored using
a software solution or in an HSM.
### Software example (Go)
This example assumes that the key is stored in software using the
variable name `key`. It demonstrates how to wrap the target key using
Golang crypto libraries.
Once you have the wrapping key, you can parse it using the `encoding/pem`
and `crypto/x509` libraries (the example code below assumes that the wrapping
key has been written to a variable called `wrappingKeyString`):
```
keyBlock, _ := pem.Decode([]byte(wrappingKeyString))
parsedKey, err := x509.ParsePKIXPublicKey(keyBlock.Bytes)
if err != nil {
	return err
}
// Assert the RSA type so the key can be used with rsa.EncryptOAEP below.
wrappingKey, ok := parsedKey.(*rsa.PublicKey)
if !ok {
	return errors.New("wrapping key is not an RSA public key")
}
```
Then generate an ephemeral AES key for wrapping the target key.
This example uses Golang's `crypto/rand` library for generating the key:
```
ephemeralAESKey := make([]byte, 32)
_, err := rand.Read(ephemeralAESKey)
if err != nil {
return err
}
```
~> **NOTE**: Be sure to securely delete the ephemeral AES key once it
has been used!
Google's [tink library](https://pkg.go.dev/github.com/google/tink/[email protected]/kwp/subtle)
provides a function for performing the key wrap operation:
```
wrapKWP, err := subtle.NewKWP(ephemeralAESKey)
if err != nil {
return err
}
wrappedTargetKey, err := wrapKWP.Wrap(key)
if err != nil {
return err
}
```
Then encrypt the ephemeral AES key using the transit wrapping key:
```
wrappedAESKey, err := rsa.EncryptOAEP(
sha256.New(),
rand.Reader,
wrappingKey,
ephemeralAESKey,
[]byte{},
)
if err != nil {
return err
}
```
Note that though this example uses SHA256, Vault also supports the use of
SHA1, SHA384, or SHA512. The hash function that was used at this step will
need to be provided as a parameter when importing the key.
Finally, concatenate the wrapped keys into a single byte string.
The leftmost 4096 bits of the string should be the wrapped AES key, and
the remaining bits should be the wrapped target key. Then the resulting
bytes should be base64-encoded.
```
combinedCiphertext := append(wrappedAESKey, wrappedTargetKey...)
base64Ciphertext := base64.StdEncoding.EncodeToString(combinedCiphertext)
```
This is the ciphertext that should be provided to Vault when importing a
key into the transit secrets engine.
```shell-session
$ vault write transit/keys/test-key/import ciphertext=$CIPHERTEXT hash_function=SHA256 type=$KEY_TYPE
```
### AWS CloudHSM example
This example demonstrates how to import a key into the transit secrets engine from
an AWS CloudHSM cluster. The process and mechanisms used will apply to importing
a key from an HSM in general, but the details will differ between HSMs.
For information on creating and communicating with an AWS CloudHSM cluster, see
the [Getting Started guide in the AWS CloudHSM documentation](https://docs.aws.amazon.com/cloudhsm/latest/userguide/getting-started.html).
Communication with the HSM uses AWS's `key_mgmt_util` tool. For help setting that
up, see the [Getting Started page for key_mgmt_util](https://docs.aws.amazon.com/cloudhsm/latest/userguide/key_mgmt_util-getting-started.html).
The first step is writing the transit wrapping key to the HSM. This involves
creating a new RSA public key object with the key returned by transit's
`wrapping_key` endpoint.
```shell-session
$ importPubKey -f wrapping_key.pem -l "vault-transit-wrapping-key"
```
This will create the public key in the HSM with all of the necessary permissions.
If you're using a different tool, make sure that the usage for the wrapping key
includes the attribute `CKA_WRAP`.
The next step is wrapping the target key using the wrapping key. If the
ID of the target key is `1` and the wrapping key is `2`, the command looks like this:
```shell-session
$ wrapKey -noheader -k 1 -w 2 -t 3 -m 7 -out ciphertext.key
```
The `-m 7` flag specifies the mechanism to use for the key wrapping. For AWS CloudHSM,
7 corresponds to the PKCS#11 mechanism `CKM_RSA_AES_KEY_WRAP` ([see the AWS documentation for details](https://docs.aws.amazon.com/cloudhsm/latest/userguide/key_mgmt_util-wrapKey.html)).
The `-t 3` flag specifies `SHA256` as the hash function. The result is written to a
file called `ciphertext.key`. The `-noheader` flag ensures that the ciphertext does
not include an AWS-specific header.
The output from this is a binary file, which needs to be base64-encoded when it
is provided to Vault.
```shell-session
$ export CIPHERTEXT=$(base64 ciphertext.key)
$ vault write transit/keys/test-key/import ciphertext=$CIPHERTEXT hash_function=SHA256 type=$KEY_TYPE
```
Once the key has been imported, it can be used like any other transit key.
---
layout: docs
page_title: Key Management - Secrets Engines
description: >-
The key management secrets engine provides a consistent workflow for distribution and lifecycle
management of cryptographic keys in various key management service (KMS) providers.
---
# Key management secrets engine
@include 'alerts/enterprise-and-hcp.mdx'
The key management secrets engine requires [Vault
Enterprise](https://www.hashicorp.com/products/vault/pricing) with the Advanced Data
Protection (ADP) module.
The key management secrets engine provides a consistent workflow for distribution and lifecycle
management of cryptographic keys in various key management service (KMS) providers. It allows
organizations to maintain centralized control of their keys in Vault while still taking advantage
of cryptographic capabilities native to the KMS providers.
The secrets engine generates and owns original copies of key material. When an operator decides
to distribute and manage the lifecycle of a key in one of the [supported KMS providers](#kms-providers),
a copy of the key material is distributed. This provides additional durability and disaster
recovery means for the complete lifecycle of the key in the KMS provider.
Key material will always be securely transferred in accordance with the
[key import specification](#kms-providers) of the supported KMS providers.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the key management secrets engine:
```shell-session
$ vault secrets enable keymgmt
Success! Enabled the keymgmt secrets engine at: keymgmt/
```
By default, the secrets engine will mount at the name of the engine. To enable
the secrets engine at a different path, use the `-path` argument.
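For example, to mount the engine at `kms/` instead (the path name is illustrative):

```shell-session
$ vault secrets enable -path=kms keymgmt
Success! Enabled the keymgmt secrets engine at: kms/
```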
## Usage
After the secrets engine is mounted and a user/machine has a Vault token with
the proper permission, it can use this secrets engine to generate, distribute, and
manage the lifecycle of cryptographic keys in [supported KMS providers](#kms-providers).
1. Create a named cryptographic key of a specified type:
```shell-session
$ vault write -f keymgmt/key/example-key type="rsa-2048"
Success! Data written to: keymgmt/key/example-key
```
Keys created by the secrets engine are considered general-purpose until
they're distributed to a KMS provider.
1. Configure a KMS provider:
```shell-session
$ vault write keymgmt/kms/example-kms \
provider="azurekeyvault" \
key_collection="keyvault-name" \
credentials=client_id="a0454cd1-e28e-405e-bc50-7477fa8a00b7" \
credentials=client_secret="eR%HizuCVEpAKgeaUEx" \
credentials=tenant_id="cd4bf224-d114-4f96-9bbc-b8f45751c43f"
```
Conceptually, a KMS provider resource represents a destination for keys to be distributed to
and subsequently managed in. It is configured using a generic set of parameters. The values
supplied to the generic set of parameters will differ depending on the specified `provider`.
This operation creates a KMS provider that represents a named Azure Key Vault instance.
This is accomplished by specifying the `azurekeyvault` provider along with other provider-specific
parameter values. For details on how to configure each supported KMS provider, see the
[KMS Providers](#kms-providers) section.
1. Distribute a key to a KMS provider:
```shell-session
$ vault write keymgmt/kms/example-kms/key/example-key \
purpose="encrypt,decrypt" \
protection="hsm"
```
This operation distributes a **copy** of the named key to the KMS provider with a specific
`purpose` and `protection`. The `purpose` defines the set of cryptographic capabilities
that the key will have in the KMS provider. The `protection` defines where cryptographic
operations are performed with the key in the KMS provider. See the API documentation for a list of
supported [purpose](/vault/api-docs/secret/key-management#purpose) and [protection](/vault/api-docs/secret/key-management#protection)
values.
~> **Note:** The amount of time it takes to distribute a key to a KMS provider is proportional to the
number of versions that the key has. If a timeout occurs when distributing a key to a KMS
provider, you may need to increase the [VAULT_CLIENT_TIMEOUT](/vault/docs/commands#vault_client_timeout).
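For example, the timeout can be raised for just the distribution call by setting the environment variable inline (the 300-second value is illustrative):

```shell-session
$ VAULT_CLIENT_TIMEOUT=300 vault write keymgmt/kms/example-kms/key/example-key \
    purpose="encrypt,decrypt" \
    protection="hsm"
```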
1. Rotate a key:
```shell-session
$ vault write -f keymgmt/key/example-key/rotate
```
Rotating a key creates a new key version that contains new key material. The key will be rotated
in both Vault and the KMS provider that the key has been distributed to. The new key version
will be enabled and set as the current version for cryptographic operations in the KMS provider.
1. Enable or disable key versions:
```shell-session
$ vault write keymgmt/key/example-key min_enabled_version=2
```
The `min_enabled_version` of a key can be updated in order to enable or disable sequences of
key versions. All versions of the key less than the `min_enabled_version` will be disabled for
cryptographic operations in the KMS provider that the key has been distributed to. Setting a
`min_enabled_version` of `0` means that all key versions will be enabled.
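For example, to confirm the change by reading the key back (assuming `jq` is available):

```shell-session
$ vault read -format=json keymgmt/key/example-key | jq '.data.min_enabled_version'
2
```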
1. Remove a key from a KMS provider:
```shell-session
$ vault delete keymgmt/kms/example-kms/key/example-key
```
This operation results in the key being deleted from the KMS provider. The key will still exist
in the secrets engine and can be redistributed to a KMS provider at a later time.
To permanently delete the key from the secrets engine, the [delete key](/vault/api-docs/secret/key-management#delete-key)
API may be invoked.
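A sketch of that permanent deletion, which assumes the key's `deletion_allowed` parameter has first been set to `true`:

```shell-session
$ vault write keymgmt/key/example-key deletion_allowed=true
$ vault delete keymgmt/key/example-key
```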
## Key types
The key management secrets engine supports generation of the following key types:
- `aes256-gcm96` - AES-GCM with a 256-bit AES key and a 96-bit nonce (symmetric)
- `rsa-2048` - RSA with bit size of 2048 (asymmetric)
- `rsa-3072` - RSA with bit size of 3072 (asymmetric)
- `rsa-4096` - RSA with bit size of 4096 (asymmetric)
- `ecdsa-p256` - ECDSA using the P-256 elliptic curve (asymmetric)
- `ecdsa-p384` - ECDSA using the P-384 elliptic curve (asymmetric)
- `ecdsa-p521` - ECDSA using the P-521 elliptic curve (asymmetric)
## KMS providers
The key management secrets engine supports lifecycle management of keys in the following
KMS providers:
- [Azure Key Vault](/vault/docs/secrets/key-management/azurekeyvault)
- [AWS KMS](/vault/docs/secrets/key-management/awskms)
- [GCP Cloud KMS](/vault/docs/secrets/key-management/gcpkms)
Refer to the provider-specific documentation for details on how to properly configure each provider.
## Compatibility
The following table defines which key types are compatible with each KMS provider.
| Key Type | Azure Key Vault | AWS KMS | GCP Cloud KMS |
| -------------- | --------------- | ------- | ------------- |
| `aes256-gcm96` | No | **Yes** | **Yes** |
| `rsa-2048` | **Yes** | No | **Yes** |
| `rsa-3072` | **Yes** | No | **Yes** |
| `rsa-4096` | **Yes** | No | **Yes** |
| `ecdsa-p256` | No | No | **Yes** |
| `ecdsa-p384` | No | No | **Yes** |
| `ecdsa-p521` | No | No | No |
## API
The key management secrets engine has a full HTTP API. Please see the
[Key Management Secrets Engine API](/vault/api-docs/secret/key-management) for more
details.
---
layout: docs
page_title: Azure Key Vault setup guide
description: Configure the key management secrets engine, and distribute the Vault-managed keys to the target Azure Key Vault instance.
---
# Setup guide - Azure Key Vault
To manage the lifecycle of Azure Key Vault keys, you need to set up the
key management secrets engine with the `azurekeyvault` provider.
## Setup
1. Enable the key management secrets engine.
```shell-session
$ vault secrets enable keymgmt
Success! Enabled the keymgmt secrets engine at: keymgmt/
```
1. Configure a KMS provider resource named `example-kms`.
```shell-session
$ vault write keymgmt/kms/example-kms \
provider="azurekeyvault" \
key_collection="keyvault-name" \
credentials=client_id="a0454cd1-e28e-405e-bc50-7477fa8a00b7" \
credentials=client_secret="eR%HizuCVEpAKgeaUEx" \
credentials=tenant_id="cd4bf224-d114-4f96-9bbc-b8f45751c43f"
```
The command specifies the following:

- The full path to this KMS provider instance in Vault
(`keymgmt/kms/example-kms`).
- A key collection, which corresponds to the name of the Key Vault instance
in Azure.
- The KMS provider type, set to `azurekeyvault`.
- The Azure client ID credential, which can also be specified with the
`AZURE_CLIENT_ID` environment variable.
- The Azure client secret credential, which can also be specified with the
`AZURE_CLIENT_SECRET` environment variable.
- The Azure tenant ID credential, which can also be specified with the
`AZURE_TENANT_ID` environment variable.
<Tip title="API documentation">
Refer to the Azure Key Vault [API
documentation](/vault/api-docs/secret/key-management/azurekeyvault) for a
detailed description of individual configuration parameters.
</Tip>
## Usage
1. Write an RSA-2048 key pair to the secrets engine. The following command
writes a new key of type **rsa-2048** to the path `keymgmt/key/rsa-1`.
```shell-session
$ vault write keymgmt/key/rsa-1 type="rsa-2048"
Success! Data written to: keymgmt/key/rsa-1
```
The key management secrets engine currently supports generation of the key
types specified in [Key
Types](/vault/docs/secrets/key-management#key-types).
Based on the
[Compatibility](/vault/docs/secrets/key-management#compatibility) section of
the documentation, Azure Key Vault currently supports use of RSA-2048,
RSA-3072, and RSA-4096 key types.
1. Read the **rsa-1** key you created. Use JSON as the output and pipe that into
`jq`.
```shell-session
$ vault read -format=json keymgmt/key/rsa-1 | jq
```
1. To use the keys you wrote, you must distribute them to the key vault.
Distribute the **rsa-1** key to the key vault at the path
`keymgmt/kms/example-kms/key/rsa-1`.
```shell-session
$ vault write keymgmt/kms/example-kms/key/rsa-1 \
purpose="encrypt,decrypt" \
protection="hsm"
```
Here you are instructing Vault to distribute the key and specifying that its
purpose is only to encrypt and decrypt. The protection type is dependent on
the cloud provider and the value is either `hsm` or `software`. In the case
of Azure, you specify `hsm` for the protection type. The key will be
securely delivered to the key vault instance according to the [Azure Bring
Your Own
Key](https://docs.microsoft.com/en-us/azure/key-vault/keys/byok-specification)
(BYOK) specification.
1. You can list the keys that have been distributed to the Azure Key Vault instance.
```shell-session
$ vault list keymgmt/kms/example-kms/key/
Keys
----
rsa-1
```
1. Rotate the rsa-1 key.
```shell-session
$ vault write -f keymgmt/key/rsa-1/rotate
Success! Data written to: keymgmt/key/rsa-1/rotate
```
1. Confirm successful key rotation by reading the key, and getting the value of
`.data.latest_version`.
```shell-session
$ vault read -format=json keymgmt/key/rsa-1 | jq '.data.latest_version'
2
```
## Azure private link
The secrets engine can be configured to communicate with Azure Key Vault
instances using [Azure Private
Endpoints](https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview).
Follow the guide at [Integrate Key Vault with Azure Private
Link](https://docs.microsoft.com/en-us/azure/key-vault/general/private-link-service?tabs=portal)
to set up a Private Endpoint for your target Key Vault instance in Azure. The
Private Endpoint must be network reachable by Vault. This means Vault needs to
be running in the same virtual network or a peered virtual network to properly
resolve the Key Vault domain name to the Private Endpoint IP address.
The Private Endpoint configuration relies on a correct [Azure Private
DNS](https://docs.microsoft.com/en-us/azure/dns/private-dns-overview)
integration. From the host that Vault is running on, follow the steps in
[Validate that the private link connection
works](https://docs.microsoft.com/en-us/azure/key-vault/general/private-link-service?tabs=portal#validate-that-the-private-link-connection-works)
to ensure that the Key Vault domain name resolves to the Private Endpoint IP
address you've configured.
```shell-session
$ nslookup <keyvault-name>.vault.azure.net
Non-authoritative answer:
Name:
Address: 10.0.2.5 (private IP address)
Aliases: <keyvault-name>.vault.azure.net
<keyvault-name>.privatelink.vaultcore.azure.net
```
The secrets engine doesn't require special configuration to communicate with a
Key Vault instance over an Azure Private Endpoint. For example, the given [KMS
configuration](/vault/docs/secrets/key-management/azurekeyvault#configuration)
will result in the secrets engine resolving a Key Vault domain name of
`keyvault-name.vault.azure.net` to the Private Endpoint IP address. Note that
it's possible to change the Key Vault DNS suffix using the
[environment](/vault/api-docs/secret/key-management/azurekeyvault#environment)
configuration parameter or `AZURE_ENVIRONMENT` environment variable.
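For example, a sketch of pointing the engine at the Azure US Government cloud via that parameter (the environment name is illustrative; check the linked parameter documentation for accepted values):

```shell-session
$ vault write keymgmt/kms/example-kms \
    provider="azurekeyvault" \
    key_collection="keyvault-name" \
    environment="AZUREUSGOVERNMENTCLOUD"
```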
## Terraform example
If you are familiar with [Terraform](/terraform/), you can use it to deploy the
necessary Azure infrastructure.
```hcl
provider "azuread" {
version = "=0.11.0"
}
provider "azurerm" {
features {
key_vault {
purge_soft_delete_on_destroy = true
}
}
}
resource "random_id" "app_rg_name" {
byte_length = 3
}
resource "random_id" "keyvault_name" {
byte_length = 3
}
data "azurerm_client_config" "current" {}
resource "azuread_application" "key_vault_app" {
name = "app-${random_id.app_rg_name.hex}"
homepage = "http://homepage${random_id.app_rg_name.b64_url}"
identifier_uris = ["http://uri${random_id.app_rg_name.b64_url}"]
reply_urls = ["http://replyur${random_id.app_rg_name.b64_url}"]
available_to_other_tenants = false
oauth2_allow_implicit_flow = true
}
resource "azuread_service_principal" "key_vault_sp" {
application_id = azuread_application.key_vault_app.application_id
app_role_assignment_required = false
}
resource "random_password" "password" {
length = 24
special = true
override_special = "%@"
}
resource "azuread_service_principal_password" "key_vault_sp_pwd" {
service_principal_id = azuread_service_principal.key_vault_sp.id
value = random_password.password.result
end_date = "2099-01-01T01:02:03Z"
}
resource "azurerm_resource_group" "key_vault_rg" {
name = "learn-rg-${random_id.app_rg_name.hex}"
location = "West US"
}
resource "azurerm_key_vault" "key_vault_kv" {
name = "learn-keyvault-${random_id.keyvault_name.hex}"
location = azurerm_resource_group.key_vault_rg.location
resource_group_name = azurerm_resource_group.key_vault_rg.name
sku_name = "premium"
soft_delete_enabled = true
tenant_id = data.azurerm_client_config.current.tenant_id
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"backup",
"create",
"decrypt",
"delete",
"encrypt",
"get",
"import",
"list",
"purge",
"recover",
"restore",
"sign",
"unwrapKey",
"update",
"verify",
"wrapKey"
]
}
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = azuread_service_principal.key_vault_sp.object_id
key_permissions = [
"create",
"delete",
"get",
"import",
"update"
]
}
}
output "key_vault_1_name" {
value = azurerm_key_vault.key_vault_kv.name
}
output "tenant_id" {
value = data.azurerm_client_config.current.tenant_id
}
output "client_id" {
value = azuread_application.key_vault_app.application_id
}
output "client_secret" {
value = azuread_service_principal_password.key_vault_sp_pwd.value
}
```
---
layout: docs
page_title: GCP Cloud KMS - Key Management - Secrets Engines
description: Configure the key management secrets engine, and distribute the Vault-managed keys to the target GCP Cloud KMS.
---
# Setup guide - GCP Cloud KMS
To manage the lifecycle of the GCP Cloud KMS key rings, you need to set up the
key management secrets engine using the `gcpckms` provider.
## Setup
1. Enable the key management secrets engine.
```shell-session
$ vault secrets enable keymgmt
```
1. Configure a KMS provider resource named `example-kms`.
```shell-session
$ vault write keymgmt/kms/example-kms \
provider="gcpckms" \
key_collection="projects/<project-id>/locations/<location>/keyRings/<keyring>" \
credentials=service_account_file="/path/to/service_account/credentials.json"
```
The command specified the following:
- The full path to this KMS provider instance in Vault
(`keymgmt/kms/example-kms`).
- The KMS provider type is set to `gcpckms`.
- A key collection, which refers to the [resource
ID](https://cloud.google.com/kms/docs/resource-hierarchy#retrieve_resource_id)
of an existing GCP Cloud KMS key ring; this value cannot be changed after
creation.
- Credentials file to use for authentication with GCP Cloud KMS. Supplying
values for this parameter is optional, as credentials may also be
specified as the `GOOGLE_CREDENTIALS` environment variable or default
application credentials.
<Tip title="API documentation">
Refer to the GCP Cloud KMS [API
documentation](/vault/api-docs/secret/key-management/gcpkms) for a detailed
description of individual configuration parameters.
</Tip>
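If you prefer to keep the service account file out of the KMS provider
configuration, the `credentials` parameter can be omitted, as noted in the
credentials bullet above. A minimal sketch, assuming `GOOGLE_CREDENTIALS` is
set in the environment of the Vault server process (not the CLI client):
```shell-session
$ vault write keymgmt/kms/example-kms \
    provider="gcpckms" \
    key_collection="projects/<project-id>/locations/<location>/keyRings/<keyring>"
```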
## Usage
1. Write a new key of type **aes256-gcm96** to the path `keymgmt/key/aes256-gcm96`.
```shell-session
$ vault write keymgmt/key/aes256-gcm96 type="aes256-gcm96"
```
1. Read the **aes256-gcm96** key, use JSON as the output, and pipe that into `jq`.
```shell-session
$ vault read -format=json keymgmt/key/aes256-gcm96 | jq
```
**Example output:**
<CodeBlockConfig hideClipboard>
```json
{
"request_id": "631f98de-b755-9863-40db-f789ff9ff10a",
"lease_id": "",
"lease_duration": 0,
"renewable": false,
"data": {
"deletion_allowed": false,
"latest_version": 1,
"min_enabled_version": 1,
"name": "aes256-gcm96",
"type": "aes256-gcm96",
"versions": {
"1": {
"creation_time": "2021-11-16T13:07:17.878864-05:00"
}
}
},
"warnings": null
}
```
</CodeBlockConfig>
Notice the value of `versions`; it is **1** since this is the first version
of the key that Vault knows about. This will figure into the example on key
rotation later.
1. To use the keys you wrote, you must distribute them to the Cloud KMS. Add the
**aes256-gcm96** key to the Cloud KMS at the path
`keymgmt/kms/example-kms/key/aes256-gcm96`.
```shell-session
$ vault write keymgmt/kms/example-kms/key/aes256-gcm96 \
purpose="encrypt,decrypt" \
protection="hsm"
```
1. List the keys that have been distributed to the Cloud KMS instance.
```shell-session
$ vault list keymgmt/kms/example-kms/key/
Keys
----
aes256-gcm96
```
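To cross-check from the GCP side, you can list the keys in the target key ring
with the gcloud CLI. This is a sketch; it assumes gcloud is installed and
authenticated, with the placeholders matching your `key_collection`:
```shell-session
$ gcloud kms keys list \
    --project=<project-id> \
    --location=<location> \
    --keyring=<keyring>
```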
1. Rotate the key.
```shell-session
$ vault write -f keymgmt/key/aes256-gcm96/rotate
```
1. Confirm successful key rotation by reading the key and getting the value of
`.data.latest_version`.
```shell-session
$ vault read -format=json keymgmt/key/aes256-gcm96 | jq '.data.latest_version'
2
```
The key is now at version 2. In the Cloud Console, the key shows a different
version string than the original value under **Primary version**.
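If you prefer the command line over the Cloud Console, here is a sketch for
checking the primary version with the gcloud CLI (the placeholders are
assumptions, and the key name shown in Cloud KMS may differ from the Vault key
name):
```shell-session
$ gcloud kms keys describe <key-name> \
    --project=<project-id> \
    --location=<location> \
    --keyring=<keyring> \
    --format="value(primary.name)"
```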
---
layout: docs
page_title: One-Time SSH Passwords (OTP) - SSH - Secrets Engines
description: |-
The One-Time SSH Password (OTP) SSH secrets engine type allows a Vault server
to issue a One-Time Password every time a client wants to SSH into a remote
host using a helper command on the remote host to perform verification.
---
# One-Time SSH passwords
The One-Time SSH Password (OTP) SSH secrets engine type allows a Vault server to
issue a One-Time Password every time a client wants to SSH into a remote host
using a helper command on the remote host to perform verification.
An authenticated client requests credentials from the Vault server and, if
authorized, is issued an OTP. When the client establishes an SSH connection to
the desired remote host, the OTP used during SSH authentication is received by
the Vault helper, which then validates the OTP with the Vault server. The Vault
server then deletes this OTP, ensuring that it is only used once.
Since the Vault server is contacted during SSH connection establishment, every
login attempt and the correlating Vault lease information is logged to the audit
secrets engine.
See [Vault-SSH-Helper](https://github.com/hashicorp/vault-ssh-helper) for
details on the helper.
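As a sketch of what the helper setup on a remote host might look like (the
address, paths, and parameter values here are assumptions; see the
Vault-SSH-Helper README for the authoritative list):
```shell-session
$ cat /etc/vault-ssh-helper.d/config.hcl
vault_addr = "https://vault.example.com:8200"
ssh_mount_point = "ssh"
ca_cert = "/etc/vault-ssh-helper.d/vault.crt"
tls_skip_verify = false
allowed_roles = "*"
```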
This page will show a quick start for this secrets engine. For detailed
documentation on every path, use `vault path-help` after mounting the secrets
engine.
### Drawbacks
The main concern with the OTP secrets engine type is the remote host's
connection to Vault; if compromised, an attacker could spoof the Vault server
returning a successful request. This risk can be mitigated by using TLS for the
connection to Vault and checking certificate validity; future enhancements to
this secrets engine may allow for extra security on top of what TLS provides.
### Mount the secrets engine
```shell-session
$ vault secrets enable ssh
Successfully mounted 'ssh' at 'ssh'!
```
### Create a role
Create a role with the `key_type` parameter set to `otp`. All of the machines
represented by the role's CIDR list should have the helper properly installed
and configured.
```shell-session
$ vault write ssh/roles/otp_key_role \
key_type=otp \
default_user=username \
cidr_list=x.x.x.x/y,m.m.m.m/n
Success! Data written to: ssh/roles/otp_key_role
```
### Create a credential
Create an OTP credential for an IP of the remote host that belongs to
`otp_key_role`.
```shell-session
$ vault write ssh/creds/otp_key_role ip=x.x.x.x
Key Value
lease_id ssh/creds/otp_key_role/73bbf513-9606-4bec-816c-5a2f009765a5
lease_duration 600
lease_renewable false
port 22
username username
ip x.x.x.x
key 2f7e25a2-24c9-4b7b-0d35-27d5e5203a5c
key_type otp
```
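If a script only needs the OTP itself, a minimal sketch using the `-field`
flag to print just the `key` value:
```shell-session
$ vault write -field=key ssh/creds/otp_key_role ip=x.x.x.x
2f7e25a2-24c9-4b7b-0d35-27d5e5203a5c
```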
### Establish an SSH session
```shell-session
$ ssh username@x.x.x.x
Password: <Enter OTP>
username@x.x.x.x:~$
```
### Automate it!
A single CLI command can be used to create a new OTP and invoke SSH with the
correct parameters to connect to the host.
```shell-session
$ vault ssh -role otp_key_role -mode otp username@x.x.x.x
OTP for the session is `b4d47e1b-4879-5f4e-ce5c-7988d7986f37`
[Note: Install `sshpass` to automate typing in OTP]
Password: <Enter OTP>
```
The OTP will be entered automatically using `sshpass` if it is installed.
```shell-session
$ vault ssh -role otp_key_role -mode otp -strict-host-key-checking=no username@x.x.x.x
username@<IP of remote host>:~$
```
Note: `sshpass` cannot handle host key checking. Host key checking can be
disabled by setting `-strict-host-key-checking=no`.
## Tutorial
Refer to the [SSH Secrets Engine: One-Time SSH
Password](/vault/tutorials/secrets-management/ssh-otp) tutorial
to learn how to use the Vault SSH secrets engine to secure authentication and authorization for access to machines.
## API
The SSH secrets engine has a full HTTP API. Please see the
[SSH secrets engine API](/vault/api-docs/secret/ssh) for more
details.
---
layout: docs
page_title: Signed SSH Certificates - SSH - Secrets Engines
description: >-
Signed SSH certificates are the simplest and most powerful method in terms of
setup complexity and in terms of being platform agnostic. When using this
type, an SSH CA signing key is generated or configured at the secrets engine's
mount.
This key will be used to sign other SSH keys.
---
# Signed SSH certificates
Signed SSH certificates are the simplest and most powerful method in terms of setup
complexity and in terms of being platform agnostic. By leveraging Vault's
powerful CA capabilities and functionality built into OpenSSH, clients can SSH
into target hosts using their own local SSH keys.
In this section, the term "**client**" refers to the person or machine
performing the SSH operation. The "**host**" refers to the target machine. If
this is confusing, substitute "client" with "user".
This page will show a quick start for this secrets engine. For detailed documentation
on every path, use `vault path-help` after mounting the secrets engine.
## Client key signing
Before a client can request their SSH key be signed, the Vault SSH secrets engine must
be configured. Usually a Vault administrator or security team performs these
steps. It is also possible to automate these actions using a configuration
management tool like Chef, Puppet, Ansible, or Salt.
### Signing key & role configuration
The following steps are performed in advance by a Vault administrator, security
team, or configuration management tooling.
1. Mount the secrets engine. Like all secrets engines in Vault, the SSH secrets engine
must be mounted before use.
```text
$ vault secrets enable -path=ssh-client-signer ssh
Successfully mounted 'ssh' at 'ssh-client-signer'!
```
This enables the SSH secrets engine at the path "ssh-client-signer". It is
possible to mount the same secrets engine multiple times using different
`-path` arguments. The name "ssh-client-signer" is not special - it can be
any name, but this documentation will assume "ssh-client-signer".
1. Configure Vault with a CA for signing client keys using the `/config/ca`
endpoint. If you do not have an internal CA, Vault can generate a keypair for
you.
```text
$ vault write ssh-client-signer/config/ca generate_signing_key=true
Key Value
--- -----
public_key ssh-rsa AAAAB3NzaC1yc2EA...
```
If you already have a keypair, specify the public and private key parts as
part of the payload:
```text
$ vault write ssh-client-signer/config/ca \
private_key="..." \
public_key="..."
```
Regardless of whether it is generated or uploaded, the client signer public
key is accessible via the API at the `/public_key` endpoint or the CLI (see next step).
1. Add the public key to all target hosts' SSH configuration. This process can
be manual or automated using a configuration management tool. The public key is
accessible via the API and does not require authentication.
```text
$ curl -o /etc/ssh/trusted-user-ca-keys.pem http://127.0.0.1:8200/v1/ssh-client-signer/public_key
```
```text
$ vault read -field=public_key ssh-client-signer/config/ca > /etc/ssh/trusted-user-ca-keys.pem
```
Add the path where the public key contents are stored to the SSH
configuration file as the `TrustedUserCAKeys` option.
```text
# /etc/ssh/sshd_config
# ...
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
```
Restart the SSH service to pick up the changes.
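For example, on a systemd-based host (an assumption; the unit may be named
`ssh` on Debian-derived systems):
```text
$ sudo systemctl restart sshd
```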
1. Create a named Vault role for signing client keys.
~> **IMPORTANT NOTE:** Prior to Vault 1.9, if `"allowed_extensions"` is either empty or not specified in the role,
Vault will assume permissive defaults: any user assigned to the role may specify any arbitrary
extension values as part of the certificate request to the Vault server.
This may have significant impact on third-party systems that rely on an `extensions` field for security-critical information.
In those cases, consider using a template to specify default extensions, and explicitly setting
`"allowed_extensions"` to an arbitrary, non-empty string if the field is empty or not set.
Because of the way some SSH certificate features are implemented, options
are passed as a map. The following example adds the `permit-pty` extension
to the certificate, and allows the user to specify their own values for `permit-pty` and `permit-port-forwarding`
when requesting the certificate.
```text
$ vault write ssh-client-signer/roles/my-role -<<"EOH"
{
"algorithm_signer": "rsa-sha2-256",
"allow_user_certificates": true,
"allowed_users": "*",
"allowed_extensions": "permit-pty,permit-port-forwarding",
"default_extensions": {
"permit-pty": ""
},
"key_type": "ca",
"default_user": "ubuntu",
"ttl": "30m0s"
}
EOH
```
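As an optional sanity check, read the role back to confirm the stored
options:
```text
$ vault read ssh-client-signer/roles/my-role
```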
### Client SSH authentication
The following steps are performed by the client (user) that wants to
authenticate to machines managed by Vault. These commands are usually run from
the client's local workstation.
1. Locate or generate the SSH public key. Usually this is `~/.ssh/id_rsa.pub`.
If you do not have an SSH keypair, generate one:
```text
$ ssh-keygen -t rsa -C "user@example.com"
```
1. Ask Vault to sign your **public key**. This file usually ends in `.pub` and
the contents begin with `ssh-rsa ...`.
```text
$ vault write ssh-client-signer/sign/my-role \
public_key=@$HOME/.ssh/id_rsa.pub
Key Value
--- -----
serial_number c73f26d2340276aa
signed_key       ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1...
```
The result will include the serial and the signed key. This signed key is
another public key.
To customize the signing options, use a JSON payload:
```text
$ vault write ssh-client-signer/sign/my-role -<<"EOH"
{
"public_key": "ssh-rsa AAA...",
"valid_principals": "my-user",
"key_id": "custom-prefix",
"extensions": {
"permit-pty": "",
"permit-port-forwarding": ""
}
}
EOH
```
1. Save the resulting signed public key to disk. Limit permissions as needed.
```text
$ vault write -field=signed_key ssh-client-signer/sign/my-role \
public_key=@$HOME/.ssh/id_rsa.pub > signed-cert.pub
```
If you are saving the certificate directly beside your SSH keypair, suffix
the name with `-cert.pub` (`~/.ssh/id_rsa-cert.pub`). With this naming
scheme, OpenSSH will automatically use it during authentication.
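If you store the certificate somewhere else, OpenSSH can be pointed at it
explicitly. A sketch of a `~/.ssh/config` entry (the host and paths are
assumptions):
```text
# ~/.ssh/config
Host 10.0.23.5
  IdentityFile ~/.ssh/id_rsa
  CertificateFile ~/.ssh/signed-cert.pub
```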
1. (Optional) View enabled extensions, principals, and metadata of the signed
key.
```text
$ ssh-keygen -Lf ~/.ssh/signed-cert.pub
```
1. SSH into the host machine using the signed key. You must supply both the
signed public key from Vault **and** the corresponding private key as
authentication to the SSH call.
```text
$ ssh -i signed-cert.pub -i ~/.ssh/id_rsa username@10.0.23.5
```
## Host key signing
For an added layer of security, we recommend enabling host key signing. This is
used in conjunction with client key signing to provide an additional integrity
layer. When enabled, the SSH agent will verify the target host is valid and
trusted before attempting to SSH. This will reduce the probability of a user
accidentally SSHing into an unmanaged or malicious machine.
### Signing key configuration
1. Mount the secrets engine. For maximum security, mount it at a different path
from the client signer.
```text
$ vault secrets enable -path=ssh-host-signer ssh
Successfully mounted 'ssh' at 'ssh-host-signer'!
```
1. Configure Vault with a CA for signing host keys using the `/config/ca`
endpoint. If you do not have an internal CA, Vault can generate a keypair for
you.
```text
$ vault write ssh-host-signer/config/ca generate_signing_key=true
Key Value
--- -----
public_key ssh-rsa AAAAB3NzaC1yc2EA...
```
If you already have a keypair, specify the public and private key parts as
part of the payload:
```text
$ vault write ssh-host-signer/config/ca \
private_key="..." \
public_key="..."
```
Regardless of whether it is generated or uploaded, the host signer public
key is accessible via the API at the `/public_key` endpoint.
1. Extend host key certificate TTLs.
```text
$ vault secrets tune -max-lease-ttl=87600h ssh-host-signer
```
1. Create a role for signing host keys. Be sure to fill in the list of allowed
domains, set `allow_bare_domains`, or both.
```text
$ vault write ssh-host-signer/roles/hostrole \
key_type=ca \
algorithm_signer=rsa-sha2-256 \
ttl=87600h \
allow_host_certificates=true \
allowed_domains="localdomain,example.com" \
allow_subdomains=true
```
1. Sign the host's SSH public key.
```text
$ vault write ssh-host-signer/sign/hostrole \
cert_type=host \
public_key=@/etc/ssh/ssh_host_rsa_key.pub
Key Value
--- -----
serial_number 3746eb17371540d9
signed_key       ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1y...
```
1. Set the resulting signed certificate as `HostCertificate` in the SSH
configuration on the host machine.
```text
$ vault write -field=signed_key ssh-host-signer/sign/hostrole \
cert_type=host \
public_key=@/etc/ssh/ssh_host_rsa_key.pub > /etc/ssh/ssh_host_rsa_key-cert.pub
```
Set permissions on the certificate to be `0640`:
```text
$ chmod 0640 /etc/ssh/ssh_host_rsa_key-cert.pub
```
Add host key and host certificate to the SSH configuration file.
```text
# /etc/ssh/sshd_config
# ...
# For client keys
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
# For host keys
HostKey /etc/ssh/ssh_host_rsa_key
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
```
Restart the SSH service to pick up the changes.
### Client-Side host verification
1. Retrieve the host signing CA public key to validate the host signature of
target machines.
```text
$ curl http://127.0.0.1:8200/v1/ssh-host-signer/public_key
```
```text
$ vault read -field=public_key ssh-host-signer/config/ca
```
1. Add the resulting public key to the `known_hosts` file as a `@cert-authority` entry.
```text
# ~/.ssh/known_hosts
@cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2EAAA...
```
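As a convenience, the entry can be built directly from Vault. A sketch,
assuming the mount path above and hosts under `*.example.com`:
```text
$ echo "@cert-authority *.example.com $(vault read -field=public_key ssh-host-signer/config/ca)" >> ~/.ssh/known_hosts
```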
1. SSH into target machines as usual.
## Troubleshooting
When initially configuring this type of key signing, enable `VERBOSE` SSH
logging to help annotate any errors in the log.
```text
# /etc/ssh/sshd_config
# ...
LogLevel VERBOSE
```
Restart SSH after making these changes.
By default, SSH logs to `/var/log/auth.log`, but so do many other things. To
extract just the SSH logs, use the following:
```shell-session
$ tail -f /var/log/auth.log | grep --line-buffered "sshd"
```
If you are unable to make a connection to the host, the SSH server logs may
provide guidance and insights.
### Name is not a listed principal
If the `auth.log` displays the following messages:
```text
# /var/log/auth.log
key_cert_check_authority: invalid certificate
Certificate invalid: name is not a listed principal
```
The certificate does not permit the username as a listed principal for
authenticating to the system. This is most likely due to an OpenSSH bug (see
[known issues](#known-issues) for more information). This bug does not respect
the `allowed_users` option value of "\*". Here are ways to work around this
issue:
1. Set `default_user` in the role. If you are always authenticating as the same
user, set `default_user` in the role to the username you use to SSH into the
target machine:
```text
$ vault write ssh/roles/my-role -<<"EOH"
{
"default_user": "YOUR_USER",
// ...
}
EOH
```
1. Set `valid_principals` during signing. In situations where multiple users may
be authenticating to SSH via Vault, set the list of valid principals during key
signing to include the current username:
```text
$ vault write ssh-client-signer/sign/my-role -<<"EOH"
{
"valid_principals": "my-user"
// ...
}
EOH
```
### No prompt after login
If you do not see a prompt after authenticating to the host machine, the signed
certificate may not have the `permit-pty` extension. There are two ways to add
this extension to the signed certificate.
- As part of the role creation
```text
$ vault write ssh-client-signer/roles/my-role -<<"EOH"
{
"default_extensions": {
"permit-pty": ""
}
// ...
}
EOH
```
- As part of the signing operation itself:
```text
$ vault write ssh-client-signer/sign/my-role -<<"EOH"
{
"extensions": {
"permit-pty": ""
}
// ...
}
EOH
```
### No port forwarding
If port forwarding from the guest to the host is not working, the signed
certificate may not have the `permit-port-forwarding` extension. Add the
extension as part of the role creation or signing process to enable port
forwarding. See [no prompt after login](#no-prompt-after-login) for examples.
```json
{
"default_extensions": {
"permit-port-forwarding": ""
}
}
```
### No X11 forwarding
If X11 forwarding from the guest to the host is not working, the signed
certificate may not have the `permit-X11-forwarding` extension. Add the
extension as part of the role creation or signing process to enable X11
forwarding. See [no prompt after login](#no-prompt-after-login) for examples.
```json
{
"default_extensions": {
"permit-X11-forwarding": ""
}
}
```
### No agent forwarding
If agent forwarding from the guest to the host is not working, the signed
certificate may not have the `permit-agent-forwarding` extension. Add the
extension as part of the role creation or signing process to enable agent
forwarding. See [no prompt after login](#no-prompt-after-login) for examples.
```json
{
"default_extensions": {
"permit-agent-forwarding": ""
}
}
```
### Key comments
Additional steps are needed to preserve [comment attributes](https://www.rfc-editor.org/rfc/rfc4716#section-3.3.2)
in keys, which should be considered if comments are required. Private and public
keys may have comments applied to them, for example where `ssh-keygen` is used
with its `-C` parameter - similar to:
```shell-session
ssh-keygen -C "...Comments" -N "" -t rsa -b 4096 -f host-ca
```
Adapted key values containing comments must be provided with the key related
parameters as per the Vault CLI and API steps demonstrated below.
```shell-session
# Using CLI:
vault secrets enable -path=hosts-ca ssh
KEY_PRI=$(cat ~/.ssh/id_rsa | sed -z 's/\n/\\n/g')
KEY_PUB=$(cat ~/.ssh/id_rsa.pub | sed -z 's/\n/\\n/g')
# Create / update keypair in Vault
vault write hosts-ca/config/ca \
    generate_signing_key=false \
    private_key="${KEY_PRI}" \
    public_key="${KEY_PUB}"
```
```shell-session
# Using API:
curl -X POST -H "X-Vault-Token: ..." -d '{"type":"ssh"}' http://127.0.0.1:8200/v1/sys/mounts/hosts-ca
KEY_PRI=$(cat ~/.ssh/id_rsa | sed -z 's/\n/\\n/g')
KEY_PUB=$(cat ~/.ssh/id_rsa.pub | sed -z 's/\n/\\n/g')
tee payload.json <<EOF
{
"generate_signing_key" : false,
"private_key" : "${KEY_PRI}",
"public_key" : "${KEY_PUB}"
}
EOF
# Create / update keypair in Vault
curl -X POST -H "X-Vault-Token: ..." -d @payload.json http://127.0.0.1:8200/v1/hosts-ca/config/ca
```
~> **IMPORTANT:** Do NOT protect the private key with a password, since Vault can't decrypt it.
Destroy the keypair and `payload.json` from your hosts immediately after they have been confirmed as successfully uploaded.
### Known issues
- On SELinux-enforcing systems, you may need to adjust related types so that the
SSH daemon is able to read the signed host certificate. For example, adjust the
certificate to be an `sshd_key_t` type.
- On some versions of SSH, you may get the following error:
```text
no separate private key for certificate
```
This is a bug introduced in OpenSSH version 7.2 and fixed in 7.5. See
[OpenSSH bug 2617](https://bugzilla.mindrot.org/show_bug.cgi?id=2617) for
details.
- On some versions of SSH, you may get the following error on target host:
```text
userauth_pubkey: certificate signature algorithm ssh-rsa: signature algorithm not supported [preauth]
```
The fix is to add the line below to `/etc/ssh/sshd_config`:
```text
CASignatureAlgorithms ^ssh-rsa
```
The `ssh-rsa` signature algorithm is disabled by default for CA signatures as of [OpenSSH 8.2](https://www.openssh.com/txt/release-8.2).
## API
The SSH secrets engine has a full HTTP API. Please see the
[SSH secrets engine API](/vault/api-docs/secret/ssh) for more
details.
---
layout: docs
page_title: Transform - Secrets Engines - Tokenization
description: >-
More information on the Tokenization transform.
---
# Tokenization transform
Not to be confused with Vault tokens, Tokenization exchanges a
sensitive value for an unrelated value called a _token_. The original sensitive
value cannot be recovered from a token alone; tokens are irreversible. Instead,
unlike format preserving encryption, tokenization is stateful. To decode the
original value, the token must be submitted to Vault where it is
retrieved from a cryptographic mapping in storage.
## Operation
On encode, Vault generates a random, signed token and stores a mapping of a
version of that token to encrypted versions of the plaintext and metadata, as
well as a fingerprint of the original plaintext which facilitates the `tokenized`
endpoint that lets one query whether a plaintext exists in the system.
Depending on the mapping mode, the plaintext may be decoded only with possession
of the distributed token, or may be recoverable in the export operation. See
[Security Considerations](#security-considerations) for more.
Tokenization's cryptosystem uses AES256-GCM96 for encryption of its token
store, with keys derived from the token and a tokenization root key.
### Convergence
By default, tokenization produces a unique token for every encode operation.
This makes the resulting token fully independent of its plaintext and expiration.
Sometimes, though, it may be beneficial if the tokenization of a plaintext/expiration
pair tokenizes consistently to the same value. For example if one wants to
do a statistical analysis of the tokens as they relate to some other field
in a database (without decoding the token), or if one needed to tokenize
in two different systems but be able to relate the results. In this case,
one can create a tokenization transformation that is *convergent*.
When enabled at transformation creation time, Vault alters the calculation so that
encoding a plaintext and expiration tokenizes to the same value every time, and
storage keeps only a single entry of that token. Like the exportable mapping
mode, convergence should only be enabled if needed. Convergent tokenization
has a small performance penalty in external stores and a larger one in the
built in store due to the need to avoid duplicate entries and to update
metadata when convergently encoding. It is recommended that if one has some
use cases that require convergence and some that do not, one should create two
different tokenization transforms with convergence enabled on only one.
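A sketch of creating such a transformation (the transformation name and role
list are placeholders, the engine is assumed to be mounted at `transform/`,
and convergence can only be set at creation time):
```shell-session
$ vault write transform/transformations/tokenization/my-convergent \
    allowed_roles="*" \
    convergent=true
```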
### Token lookup
Some use cases may want to look up the value of a token given its plaintext. Ordinarily
this is contrary to the nature of tokenization where we want to prevent the ability
of an attacker to determine that a token corresponds to a plaintext value (a known
plaintext attack). But for use cases that require it, the
[token lookup](/vault/api-docs/secret/transform#token-lookup)
operation is supported, but only in some configurations of the tokenization
transformation. Token lookup is supported when convergence is enabled, or
if the mapping mode is exportable *and* the storage backend is external.
## Performance considerations
### Builtin (Internal) store
As tokenization is stateful, the encode operation necessarily writes values to
storage. By default, that storage is the Vault backend store itself. This
differs from some secrets engines in that the encode and decode operations require
a storage access per operation. Other engines use storage for configuration
but can process operations largely without accessing any storage.
Since encode operations involve writes to storage, they must be performed on
primary nodes, so the scalability of encoding is limited by the performance of
the primary and its storage subsystem. All other operations can be performed on
secondaries.
Finally, due to replication, writes to the primary may take some time to reach
secondaries, so other read operations like decode or metadata may not succeed on
the secondaries until this happens. In other words, tokenization is eventually
consistent.
### External storage
All nodes (except DRs) can participate in all operations using external storage,
but one must take care to monitor and scale the external storage for the level of
traffic experienced. The storage schema is simple, however, and well-known
approaches should be effective.
## Security considerations
The goal of Tokenization is to let end users' devices store the token rather than
their sensitive values (such as credit card numbers) and still participate in
transactions where the token is a stand-in for the sensitive value. For this reason
the token Vault generates is completely unrelated (i.e., irreversible) to the
sensitive value.
Furthermore, the Tokenization transform is designed to resist a number of attacks
on the values produced during encode. In particular it is designed so that
attackers cannot recover plaintext even if they steal the tokenization values
from Vault itself. In the default mapping mode,
even stealing the underlying transform key does not allow them to recover
the plaintext without also possessing the encoded token. An attacker must have
gotten access to all values in the construct.
In the `exportable` mapping mode however, the plaintext values are encrypted
in a way that can be decrypted within Vault. If the attacker possesses the
transform key and the tokenization mapping values, the plaintext can be
recovered. This mode is available for the case where operators prioritize the
ability to export all of the plaintext values in an emergency, via the
`export-decoded` operation.
### Metadata
Since tokenization isn't format preserving and requires storage, one can associate
arbitrary metadata with a token. Metadata is considered less sensitive than the
original plaintext value. As it has its own retrieval endpoint, operators can
configure policies that may allow access to the metadata of a token but not
its decoded value to enable workflows that operate just on the metadata.
## TTLs and tidying
By default, tokens are long lived, and the storage for them will be maintained
indefinitely. Where there is a concept of time-to-live, it is strongly encouraged
that the tokens be generated with a TTL. For example, as credit cards
have an expiration date, it is recommended that tokenizing a credit card
primary account number (PAN) be done with a TTL that corresponds to the time
after which the PAN is invalid.
This allows such values to be _tidied_ and removed from storage once expired.
Tokens themselves encode the expiration time, so decode and other operations
can immediately reject the operation when presented with an expired token.
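As a sketch (the role name, value, and mount path are placeholders), a TTL can
be supplied at encode time:
```shell-session
$ vault write transform/encode/my-role \
    value="4111111111111111" \
    ttl=8760h
```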
## Storage
### External SQL stores
Currently the PostgreSQL, MySQL, and MSSQL relational databases are supported as
external storage backends for tokenization.
The [Schema Endpoint](/vault/api-docs/secret/transform#create-update-store-schema)
may be used to initialize and upgrade the necessary database tables. Vault uses
a schema versioning table to determine if it needs to create or modify the
tables when using that endpoint. If you make changes to those tables yourself,
the automatic schema management may become out of sync and may fail in the future.
External stores may often be preferred due to their ability to achieve a much
higher scale of performance, especially when used with batch operations.
### Snapshot/Restore
Snapshot allows one to iteratively retrieve the tokenization state, for
backup or migration purposes. The resulting data can be fed to the restore
endpoint of the same or a different tokenization store. Note that the state
is only usable by the tokenization transform that created it, as state is
encrypted via keys in that configured transform.
### Export decoded
For stores configured with the `exportable` mapping mode, the export decoded
endpoint allows operators to retrieve the _decoded_ contents of tokenization
state, which includes tokens and their decoded, sensitive values. The
`exportable` mode is only recommended if this use case is required, as the default
cannot be decoded by attackers even if they gain access to Vault's storage and
keys.
### Migration
Tokenization stores are configured separately from the tokenization transform,
and the transform can point to multiple stores. The primary use case for this
one-to-many relationship is to facilitate migration between two tokenization
stores.
When multiple stores are configured, Vault writes new tokenization state to all
configured stores, and reads from each store in the order they were configured.
Thus, one can use multiple configured stores along with the snapshot/restore
functionality to perform a zero-downtime migration to a new store:
1. Configure the new tokenization store in the API.
1. Modify the existing tokenization transform to use both the existing and new
store.
1. Snapshot the old store.
1. Restore the snapshot to the new store.
1. Perform any desired validations.
1. Modify the tokenization transform to use only the new store.
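As a sketch of step 2 above (the transformation and store names are
placeholders; `builtin/internal` is the built-in store), pointing the
transformation at both stores is a matter of listing them in order:
```shell-session
$ vault write transform/transformations/tokenization/my-tokenization \
    allowed_roles="*" \
    stores="builtin/internal,new-store"
```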
## Key management
Tokenization supports key rotation. Keys are tied to transforms, so key
names are the same as the name of the corresponding tokenization transform.
Keys can be rotated to a new version, with backward compatibility for
decoding. Encoding is always performed with the newest key version. Key versions
can be tidied as well. Keys may also be rotated automatically on a user-defined
time interval, specified by the `auto_rotate_period` field of the key config. For more
information, see the [transform api docs](/vault/api-docs/secret/transform).
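A sketch of a manual rotation (the transformation name and mount path are
placeholders):
```shell-session
$ vault write -f transform/tokenization/keys/my-tokenization/rotate
```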
## Tutorial
Refer to [Tokenize Data with Transform Secrets
Engine](/vault/tutorials/adp/tokenization) for a
step-by-step tutorial.

---
layout: docs
page_title: Transform - Secrets Engines
description: >-
The Transform secrets engine for Vault performs secure data transformation.
---
# Transform secrets engine
@include 'alerts/enterprise-and-hcp.mdx'
Transform secrets engine requires [Vault
Enterprise](https://www.hashicorp.com/products/vault/pricing) with the Advanced Data
Protection Transform (ADP-Transform) module.
The Transform secrets engine handles secure data transformation and tokenization
against a provided input value. Transformation methods may encompass NIST-vetted
cryptographic standards such as [format-preserving encryption
(FPE)](https://en.wikipedia.org/wiki/Format-preserving_encryption) via
[FF3-1](https://csrc.nist.gov/publications/detail/sp/800-38g/rev-1/draft), but
can also be pseudonymous transformations of the data through other means, such
as masking.
The secret engine currently supports `fpe`, `masking`, and `tokenization` as
data transformation types.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Transform secrets engine:
```text
$ vault secrets enable transform
Success! Enabled the transform secrets engine at: transform/
```
By default, the secrets engine will mount at the name of the engine. To enable
the secrets engine at a different path, use the `-path` argument.
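For example, a sketch of mounting at a hypothetical `transform-prod` path (the
path name here is illustrative):

```text
$ vault secrets enable -path=transform-prod transform
Success! Enabled the transform secrets engine at: transform-prod/
```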
1. Create a named role:
```text
$ vault write transform/role/payments transformations=ccn-fpe
Success! Data written to: transform/role/payments
```
1. Create a transformation:
```text
$ vault write transform/transformations/fpe/ccn-fpe \
template=ccn \
tweak_source=internal \
allowed_roles=payments
Success! Data written to: transform/transformations/fpe/ccn-fpe
```
1. Optionally, create a template:
```text
$ vault write transform/template/ccn \
type=regex \
pattern='(\d{4})[- ](\d{4})[- ](\d{4})[- ](\d{4})' \
encode_format='$1-$2-$3-$4' \
decode_formats=last-four='$4' \
alphabet=numerics
Success! Data written to: transform/template/ccn
```
1. Optionally, create an alphabet:
```text
$ vault write transform/alphabet/numerics \
alphabet="0123456789"
Success! Data written to: transform/alphabet/numerics
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can use this secrets engine to encode and decode input
values.
1. Encode some input value using the `/encode` endpoint with a named role:
```text
$ vault write transform/encode/payments value=1111-2222-3333-4444
Key Value
--- -----
encoded_value 9300-3376-4943-8903
```
A transformation must be provided if the role contains more than one
transformation. A tweak must be provided if the tweak source for the
transformation is "supplied".
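For instance, if the `payments` role listed several transformations, the
target transformation could be named explicitly (a sketch using the `ccn-fpe`
transformation created above):

```text
$ vault write transform/encode/payments value=1111-2222-3333-4444 \
    transformation=ccn-fpe
```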
1. Decode some input value using the `/decode` endpoint with a named role:
```text
$ vault write transform/decode/payments value=9300-3376-4943-8903
Key Value
--- -----
decoded_value 1111-2222-3333-4444
```
A transformation must be provided if the role contains more than one
transformation. A tweak must be provided if the tweak source for the
transformation is "supplied" or "generated".
1. Decode some input value using the `/decode` endpoint with a named role and decode format:
```text
$ vault write transform/decode/payments/last-four value=9300-3376-4943-8903
Key Value
--- -----
decoded_value 4444
```
A transformation must be provided if the role contains more than one
transformation. A tweak must be provided if the tweak source for the
transformation is "supplied" or "generated". A decode format can optionally
be provided. If one isn't provided, the decoded output will be formatted to
match the template's pattern as in the previous example.
## Roles, transformations, templates, and alphabets
The Transform secrets engine contains several types of resources that
encapsulate different aspects of the information required in order to perform
data transformation.
- **Roles** are the basic high-level construct that holds the set of
transformations that it is allowed to perform. The role name is provided when
performing encode and decode operations.
- **Transformations** hold information about a particular transformation: the
type of transformation to perform, the template it should use for value
detection, and other transformation-specific values such as the tweak source
or the masking character to use.
- **Templates** allow us to determine what and how to capture the value that we
want to transform.
- **Alphabets** provide the set of valid UTF-8 characters contained within both
the input and transformed values of FPE transformations.
## Transformations
### Format preserving encryption
Format Preserving Encryption (FPE) performs cryptographically secure
transformation via FF3-1 to encode input values while maintaining its data
format and length. FF3-1 is a construction that uses AES-256 for
encryption.
#### Tweak and tweak source
FF3-1 uses a non-confidential parameter called the tweak along with the
ciphertext when performing encryption and decryption operations. The tweak
is precisely a 7-byte value. The secret engine consumes a base64 encoded string
of this value for its encode and decode operation whenever this input is
required.
In order to simplify the flow of encoding and decoding operations, transformation
creation can take care of generating and associating a tweak value. This allows
applications to provide a single value without needing to generate or store
any other metadata.
In cases where more granularity is required, a tweak value can be generated by
Vault and returned, or it may be independently generated and provided.
In summary, there are three ways in which the tweak value may be sourced:
- `supplied`: This is the default behavior for FPE transformations. The tweak
value must be generated externally and supplied on encode and decode
operations.
- `generated`: The secret engine will take care of generating the tweak value
on encode operations and return this back as part of the response along
with the encoded value. It is up to the application to store this value
so that it can be provided back when decoding the encoded value.
- `internal`: The secret engine will generate an internal tweak value per
transformation. This value is not returned on encode or decode operations
since it gets re-used for all encode and decode operations for the
transformation. Depending on the uniqueness of the dataset, this mode may
introduce higher risks, but provides the most convenience since the value does
not need to be stored separately. This mode should only be used if the values
being encoded are sufficiently unique.
Your team and organization should weigh the trade-offs when it comes to
choosing the proper tweak source to use. For `supplied` and `internal`
sourcing, please see [FF3-1 Tweak Usage Details](/vault/docs/secrets/transform/ff3-tweak-details)
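As an illustrative sketch, encoding against a transformation created with
`tweak_source=supplied` (unlike the `internal` example above) might look like
the following, where the tweak is the base64 encoding of an externally
generated 7-byte value:

```text
$ vault write transform/encode/payments value=1111-2222-3333-4444 \
    tweak="MTIzNDU2Nw=="
```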
#### Input limits
FF3-1 specifies both minimum and maximum limits on the length of an input.
These limits are driven by the security goals, making sure that for a given
alphabet the input size does not leave the input guessable by brute force.
Given an alphabet of length A, an input length L is valid if:
- L >= 2,
- A<sup>L</sup> >= 1,000,000
- and L <= 2 \* floor(log<sub>A</sub>(2<sup>96</sup>)).
As a concrete example, for handling credit card numbers, A is 10, L is 16, so
valid input lengths would be between 6 and 56 characters. This is because
10<sup>6</sup> = 1,000,000 (and the minimum length of 6 already satisfies L >= 2), while 2 \* floor(log<sub>10</sub>(2<sup>96</sup>)) = 56.
Of course, in the case of credit card numbers valid input would always be
between 12 and 19 decimal digits.
#### Output limitations
After transformation and formatting by the template, the value is an encrypted
version of the input with the format preserved. However, the value itself may
be _invalid_ with respect to other standards. For example the output credit card
number may not validate (it likely won't create a valid check digit).
So one must consider when the outputs are stored whether validation in storage
may reject them.
### Masking
Masking performs replacement of matched characters on the input value with a
desired character. This form of transformation is non-reversible and thus does
not support retrieving the original value back using the decode operation.
### Tokenization
[Tokenization](/vault/docs/secrets/transform/tokenization) exchanges a
sensitive value for an unrelated value called a _token_. The original sensitive
value cannot be recovered from a token alone, they are irreversible.
#### Inputs
Tokenization inputs are not processed by templates or alphabets, as
tokenization does not preserve any of the contents or format of the input.
#### Outputs
Tokenization is not format preserving. The token output is a Base58 encoded
string value of unrelated length, and is not rendered by a template.
The decoded value is returned verbatim as it was before encoding.
#### Metadata
As tokenization isn't format preserving and is stateful, the input values can be
any length, subject to other limits in Vault's request processing. In addition,
non-sensitive _metadata_ can be encoded alongside the value, and retrieved either
with or independently of the original value.
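For example, a hedged sketch of encoding with attached metadata, assuming a
tokenization transformation (the role, transformation name, and metadata
values here are illustrative):

```text
$ vault write transform/encode/payments value=1111-2222-3333-4444 \
    transformation=credit-card \
    metadata="Organization=Acme,Purpose=Billing"
```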
#### Operations
In addition to encode and decode, as tokenization is stateful, it provides
several additional operations:
- Retrieve metadata given a token.
- Check whether an input value has a valid, unexpired token.
- For some configurations, retrieve a previously encoded token for a plaintext
input.
#### Stores
Tokenization is stateful. Tokenized state can be stored internally (the
default) or in an external store. Currently only PostgreSQL, MySQL, and MSSQL are supported
for external storage.
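As a sketch, an external store might be configured roughly as follows; the
connection details are placeholders and should be adapted to your environment:

```text
$ vault write transform/stores/postgres \
    type=sql \
    driver=postgres \
    supported_transformations=tokenization \
    connection_string="postgresql://{{username}}:{{password}}@localhost:5432/postgres" \
    username=vault \
    password=vault
```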
#### Mapping modes
[Tokenization](/vault/docs/secrets/transform/tokenization) stores the results of an encode operation
in storage using a cryptographic construct that enhances the safety of its values.
In the `default` mapping mode, the token itself is transformed via a one way
function involving the transform key and elements of the token. As Vault does
not store the token, the values in Vault storage themselves cannot be used to
retrieve original input.
A second mapping mode, `exportable` is provided for cases where
operators may need to recover the full set of decoded inputs in an emergency via
the export operation. It is strongly recommended that one use the `default` mode if
possible, as it is resistant to more types of attack.
#### Convergent tokenization
~> **Note:** Convergent tokenization is not supported for transformations with
imported keys.
In addition, tokenization transformations may be configured as *convergent*, meaning
that tokenizing a plaintext and expiration more than once results in the
same token value. Enabling convergence has performance and security
[considerations](/vault/docs/secrets/transform/tokenization#convergence).
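For example, convergence is requested when the transformation is created (a
sketch; the transformation and role names are illustrative):

```text
$ vault write transform/transformations/tokenization/convergent-ccn \
    allowed_roles=payments \
    convergent=true
```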
## Deletion behavior
The deletion of resources, aside from roles, is guarded by checking whether any
other related resources are currently using them, in order to avoid accidental
data loss of any encoded value that may depend on these bits of information to
decode and reconstruct the original value. Role deletion can be done safely
since the information related to the transformation itself is contained within
the transformation object and its related resources.
The following rules apply when deleting a resource:
- A transformation cannot be deleted if it's in use by a role.
- A template or store cannot be deleted if it's in use by a transformation.
- An alphabet cannot be deleted if it's in use by a template.
## Provided builtin resources
The secret engine provides a set of builtin templates and alphabets that are
considered common. Builtin templates cannot be deleted, and the prefix
"builtin/" on template and alphabet names is a reserved keyword.
### Templates
The following builtin templates are available for use in the secret engine:
- builtin/creditcardnumber
- builtin/socialsecuritynumber
Note that these templates only check for the matching pattern(s), and not the
validity of the value itself. For instance, the builtin credit card number
template can determine whether the provided value is in the format of commonly
issued credit cards, but not whether the credit card is a valid number from a
particular issuer.
Templates currently only accept regular expressions as the matching pattern
type. Vault uses Go's standard library regexp engine, which supports
[the RE2 syntax](https://github.com/google/re2/wiki/Syntax).
**Note**: The `builtin/any` template is only valid for, and is the default
for, the tokenization transform.
### Alphabets
The following builtin alphabets are available for use in the secret engine:
- builtin/numeric
- builtin/alphalower
- builtin/alphaupper
- builtin/alphanumericlower
- builtin/alphanumericupper
- builtin/alphanumeric
Custom alphabets must contain between 2 and 65536 unique characters.
### Stores
The following builtin store is available (and is the default) for tokenization
transformations:
- builtin/internal
## Tutorial
Refer to the [Transform Secrets Engine](/vault/tutorials/adp/transform) tutorial to learn how to use the Transform secrets engine to handle secure data transformation and tokenization of provided secrets.
## Bring your own key (BYOK)
~> **Note:** Key import functionality supports cases where there is a need to bring
in an existing key from an HSM or other outside systems. It is more secure to
have Transform generate and manage a key within Vault.
### Via the Command Line
The Vault command line tool [includes a helper](/vault/docs/commands/transform/) to perform the steps described
in the Manual process section below.
### Via the API
First, the wrapping key needs to be read from the transform secrets engine:
```text
$ vault read transform/wrapping_key
```
The wrapping key will be a 4096-bit RSA public key.
Then, the wrapping key is used to create the ciphertext input for the `import` endpoint,
as described below. The target key refers to the key being imported.
### HSM
If the key is being imported from an HSM that supports PKCS#11, there are
two possible scenarios:
- If the HSM supports the CKM_RSA_AES_KEY_WRAP mechanism, it can be used to wrap the
target key using the wrapping key.
- Otherwise, two mechanisms can be combined to wrap the target key. First, a 256-bit AES key is
generated and then used to wrap the target key using the CKM_AES_KEY_WRAP_KWP mechanism.
Then the AES key should be wrapped under the wrapping key using the CKM_RSA_PKCS_OAEP mechanism
using MGF1 and either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.
The ciphertext is constructed by appending the wrapped target key to the wrapped AES key.
The ciphertext bytes should be base64-encoded.
### Manual process
If the target key is not stored in an HSM or KMS, the following steps can be used to construct
the ciphertext for the input of the `import` endpoint:
1. Generate an ephemeral 256-bit AES key.
2. Wrap the target key using the ephemeral AES key with AES-KWP.
3. Wrap the AES key under the Vault wrapping key using RSAES-OAEP with MGF1 and
either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.
4. Delete the ephemeral AES key.
5. Append the wrapped target key to the wrapped AES key.
6. Base64 encode the result.
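As a rough illustration of these steps, the following Python sketch uses the
third-party `cryptography` package (assumed to be available); the file names
and the SHA-256 OAEP hash choice are assumptions for illustration:

```python
import base64
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.keywrap import aes_key_wrap_with_padding

# The 4096-bit RSA wrapping key previously read from transform/wrapping_key.
with open("wrapping_key.pem", "rb") as f:
    wrapping_key = serialization.load_pem_public_key(f.read())

# The raw bytes of the target key being imported.
with open("target.key", "rb") as f:
    target_key = f.read()

# 1. Generate an ephemeral 256-bit AES key.
ephemeral_aes_key = os.urandom(32)

# 2. Wrap the target key with AES-KWP under the ephemeral key.
wrapped_target_key = aes_key_wrap_with_padding(ephemeral_aes_key, target_key)

# 3. Wrap the AES key under the Vault wrapping key with RSAES-OAEP (MGF1/SHA-256).
wrapped_aes_key = wrapping_key.encrypt(
    ephemeral_aes_key,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# 4. Discard the ephemeral AES key; 5. append the wrapped target key to the
# wrapped AES key; 6. base64 encode the result for the `import` endpoint.
del ephemeral_aes_key
ciphertext = base64.b64encode(wrapped_aes_key + wrapped_target_key).decode()
print(ciphertext)
```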
For more details on the key wrapping process, see the [key wrapping guide](/vault/docs/secrets/transit/key-wrapping-guide)
(be sure to use the transform wrapping key when wrapping a key for import into the transform secrets engine).
## API
The Transform secrets engine has a full HTTP API. Please see the
[Transform secrets engine API](/vault/api-docs/secret/transform) for more
details.

---
layout: docs
page_title: 'PKI - Secrets Engines: Quick Start: Root CA Setup'
description: The PKI secrets engine for Vault generates TLS certificates.
---
# PKI secrets engine - quick start - root CA setup
This document provides a brief overview of setting up a Vault PKI Secrets
Engine with a Root CA certificate.
#### Mount the backend
The first step to using the PKI backend is to mount it. Unlike the `kv`
backend, the `pki` backend is not mounted by default.
```shell-session
$ vault secrets enable pki
Successfully mounted 'pki' at 'pki'!
```
#### Configure a CA certificate
Next, Vault must be configured with a CA certificate and associated private
key. We'll take advantage of the backend's self-signed root generation support,
but Vault also supports generating an intermediate CA (with a CSR for signing)
or setting a PEM-encoded certificate and private key bundle directly into the
backend.
Generally you'll want a root certificate to only be used to sign CA
intermediate certificates, but for this example we'll proceed as if you will
issue certificates directly from the root. As it's a root, we'll want to set a
long maximum lifetime for the certificate; since it honors the maximum mount
TTL, first we adjust that:
```shell-session
$ vault secrets tune -max-lease-ttl=87600h pki
Successfully tuned mount 'pki'!
```
That sets the maximum TTL for secrets issued from the mount to 10 years. (Note
that roles can further restrict the maximum TTL.)
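If desired, the tuned value can be read back to confirm it:

```shell-session
$ vault read sys/mounts/pki/tune
```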
Now, we generate our root certificate:
```shell-session
$ vault write pki/root/generate/internal common_name=myvault.com ttl=87600h
Key Value
--- -----
certificate -----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJqrw/9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx
MjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAKY/vJ6sRFym+yFYUneoVtDmOCaDKAQiGzQw0IXL
wgMBBb82iKpYj5aQjXZGIl+VkVnCi+M2AQ/iYXWZf1kTAdle4A6OC4+VefSIa2b4
eB7R8aiGTce62jB95+s5/YgrfIqk6igfpCSXYLE8ubNDA2/+cqvjhku1UzlvKBX2
hIlgWkKlrsnybHN+B/3Usw9Km/87rzoDR3OMxLV55YPHiq6+olIfSSwKAPjH8LZm
uM1ITLG3WQUl8ARF17Dj+wOKqbUG38PduVwKL5+qPksrvNwlmCP7Kmjncc6xnYp6
5lfr7V4DC/UezrJYCIb0g/SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM
kxRZA2wR4f/yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f/yNhQUMBYG
A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn
7mvD2+sr6lx4DW/vJwVSW8eTuLtOLNu6/aFhcgTY/OOB8q4n6iHuLrEt8/RV7RJI
obRx74SfK9BcOLt4+DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF+ZphN
nNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI/CJoBb+P5Ahk6krc
LZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C+/1CDc9YL
zjq+8nI2ooIrj4ZKZCOm2fKd1KeGN/CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl/1V5
BT55jevSPVVu
-----END CERTIFICATE-----
expiration 1828121029
issuing_ca -----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJqrw/9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx
MjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAKY/vJ6sRFym+yFYUneoVtDmOCaDKAQiGzQw0IXL
wgMBBb82iKpYj5aQjXZGIl+VkVnCi+M2AQ/iYXWZf1kTAdle4A6OC4+VefSIa2b4
eB7R8aiGTce62jB95+s5/YgrfIqk6igfpCSXYLE8ubNDA2/+cqvjhku1UzlvKBX2
hIlgWkKlrsnybHN+B/3Usw9Km/87rzoDR3OMxLV55YPHiq6+olIfSSwKAPjH8LZm
uM1ITLG3WQUl8ARF17Dj+wOKqbUG38PduVwKL5+qPksrvNwlmCP7Kmjncc6xnYp6
5lfr7V4DC/UezrJYCIb0g/SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM
kxRZA2wR4f/yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f/yNhQUMBYG
A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn
7mvD2+sr6lx4DW/vJwVSW8eTuLtOLNu6/aFhcgTY/OOB8q4n6iHuLrEt8/RV7RJI
obRx74SfK9BcOLt4+DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF+ZphN
nNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI/CJoBb+P5Ahk6krc
LZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C+/1CDc9YL
zjq+8nI2ooIrj4ZKZCOm2fKd1KeGN/CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl/1V5
BT55jevSPVVu
-----END CERTIFICATE-----
serial_number 26:aa:f0:ff:d1:03:65:ba:78:0c:4c:5a:2e:38:74:bd:20:07:c8:18
```
The returned certificate is purely informational; it and its private key are
safely stored in the backend mount.
#### Set URL configuration
Generated certificates can have the CRL location and the location of the
issuing certificate encoded. These values must be set manually, typically to
the FQDN associated with the Vault server, but they can be changed at any time.
```shell-session
$ vault write pki/config/urls issuing_certificates="http://vault.example.com:8200/v1/pki/ca" crl_distribution_points="http://vault.example.com:8200/v1/pki/crl"
Success! Data written to: pki/config/urls
```
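The URL configuration can be read back to verify it:

```shell-session
$ vault read pki/config/urls
```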
#### Configure a role
The next step is to configure a role. A role is a logical name that maps to a
policy used to generate credentials. For example, let's create an
"example-dot-com" role:
```shell-session
$ vault write pki/roles/example-dot-com \
allowed_domains=example.com \
allow_subdomains=true max_ttl=72h
Success! Data written to: pki/roles/example-dot-com
```
#### Issue certificates
By writing to the `roles/example-dot-com` path we are defining the
`example-dot-com` role. To generate a new certificate, we simply write
to the `issue` endpoint with that role name. Vault is now configured to create
and manage certificates!
```shell-session
$ vault write pki/issue/example-dot-com \
common_name=blah.example.com
Key Value
--- -----
certificate -----BEGIN CERTIFICATE-----
MIIDvzCCAqegAwIBAgIUWQuvpMpA2ym36EoiYyf3Os5UeIowDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyNDA1WhcNMTcx
MjExMTkyNDM1WjAbMRkwFwYDVQQDExBibGFoLmV4YW1wbGUuY29tMIIBIjANBgkq
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1CU93lVgcLXGPxRGTRT3GM5wqytCo7Z6
gjfoHyKoPCAqjRdjsYgp1FMvumNQKjUat5KTtr2fypbOnAURDCh4bN/omcj7eAqt
ldJ8mf8CtKUaaJ1kp3R6RRFY/u96BnmKUG8G7oDeEDsKlXuEuRcNbGlGF8DaM/O1
HFa57cM/8yFB26Nj5wBoG5Om6ee5+W+14Qee8AB6OJbsf883Z+zvhJTaB0QM4ZUq
uAMoMVEutWhdI5EFm5OjtMeMu2U+iJl2XqqgQ/JmLRjRdMn1qd9TzTaVSnjoZ97s
jHK444Px1m45einLqKUJ+Ia2ljXYkkItJj9Ut6ZSAP9fHlAtX84W3QIDAQABo4H/
MIH8MA4GA1UdDwEB/wQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH
AwIwHQYDVR0OBBYEFH/YdObW6T94U0zuU5hBfTfU5pt1MB8GA1UdIwQYMBaAFECK
dYM4gDbMkxRZA2wR4f/yNhQUMDsGCCsGAQUFBwEBBC8wLTArBggrBgEFBQcwAoYf
aHR0cDovLzEyNy4wLjAuMTo4MjAwL3YxL3BraS9jYTAbBgNVHREEFDASghBibGFo
LmV4YW1wbGUuY29tMDEGA1UdHwQqMCgwJqAkoCKGIGh0dHA6Ly8xMjcuMC4wLjE6
ODIwMC92MS9wa2kvY3JsMA0GCSqGSIb3DQEBCwUAA4IBAQCDXbHV68VayweB2tkb
KDdCaveaTULjCeJUnm9UT/6C0YqC/RxTAjdKFrilK49elOA3rAtEL6dmsDP2yH25
ptqi2iU+y99HhZgu0zkS/p8elYN3+l+0O7pOxayYXBkFf5t0TlEWSTb7cW+Etz/c
MvSqx6vVvspSjB0PsA3eBq0caZnUJv2u/TEiUe7PPY0UmrZxp/R/P/kE54yI3nWN
4Cwto6yUwScOPbVR1d3hE2KU2toiVkEoOk17UyXWTokbG8rG0KLj99zu7my+Fyre
sjV5nWGDSMZODEsGxHOC+JgNAC1z3n14/InFNOsHICnA5AnJzQdSQQjvcZHN2NyW
+t4f
-----END CERTIFICATE-----
issuing_ca -----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJqrw/9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx
MjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAKY/vJ6sRFym+yFYUneoVtDmOCaDKAQiGzQw0IXL
wgMBBb82iKpYj5aQjXZGIl+VkVnCi+M2AQ/iYXWZf1kTAdle4A6OC4+VefSIa2b4
eB7R8aiGTce62jB95+s5/YgrfIqk6igfpCSXYLE8ubNDA2/+cqvjhku1UzlvKBX2
hIlgWkKlrsnybHN+B/3Usw9Km/87rzoDR3OMxLV55YPHiq6+olIfSSwKAPjH8LZm
uM1ITLG3WQUl8ARF17Dj+wOKqbUG38PduVwKL5+qPksrvNwlmCP7Kmjncc6xnYp6
5lfr7V4DC/UezrJYCIb0g/SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM
kxRZA2wR4f/yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f/yNhQUMBYG
A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn
7mvD2+sr6lx4DW/vJwVSW8eTuLtOLNu6/aFhcgTY/OOB8q4n6iHuLrEt8/RV7RJI
obRx74SfK9BcOLt4+DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF+ZphN
nNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI/CJoBb+P5Ahk6krc
LZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C+/1CDc9YL
zjq+8nI2ooIrj4ZKZCOm2fKd1KeGN/CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl/1V5
BT55jevSPVVu
-----END CERTIFICATE-----
private_key -----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA1CU93lVgcLXGPxRGTRT3GM5wqytCo7Z6gjfoHyKoPCAqjRdj
sYgp1FMvumNQKjUat5KTtr2fypbOnAURDCh4bN/omcj7eAqtldJ8mf8CtKUaaJ1k
p3R6RRFY/u96BnmKUG8G7oDeEDsKlXuEuRcNbGlGF8DaM/O1HFa57cM/8yFB26Nj
5wBoG5Om6ee5+W+14Qee8AB6OJbsf883Z+zvhJTaB0QM4ZUquAMoMVEutWhdI5EF
m5OjtMeMu2U+iJl2XqqgQ/JmLRjRdMn1qd9TzTaVSnjoZ97sjHK444Px1m45einL
qKUJ+Ia2ljXYkkItJj9Ut6ZSAP9fHlAtX84W3QIDAQABAoIBAQCf5YIANfF+gkNt
/+YM6yRi+hZJrU2I/1zPETxPW1vaFZR8y4hEoxCEDD8JCRm+9k+w1TWoorvxgkEv
r1HuDALYbNtwLd/71nCHYCKyH1b2uQpyl07qOAyASlb9r5oVjz4E6eobkd3N9fJA
QN0EdK+VarN968mLJsD3Hxb8chGdObBCQ+LO+zdqQLaz+JwhfnK98rm6huQtYK3w
ccd0OwoVmtZz2eJl11TJkB9fi4WqJyxl4wST7QC80LstB1deR78oDmN5WUKU12+G
4Mrgc1hRwUSm18HTTgAhaA4A3rjPyirBohb5Sf+jJxusnnay7tvWeMnIiRI9mqCE
dr3tLrcxAoGBAPL+jHVUF6sxBqm6RTe8Ewg/8RrGmd69oB71QlVUrLYyC96E2s56
19dcyt5U2z+F0u9wlwR1rMb2BJIXbxlNk+i87IHmpOjCMS38SPZYWLHKj02eGfvA
MjKKqEjNY/md9eVAVZIWSEy63c4UcBK1qUH3/5PNlyjk53gCOI/4OXX/AoGBAN+A
Alyd6A/pyHWq8WMyAlV18LnzX8XktJ07xrNmjbPGD5sEHp+Q9V33NitOZpu3bQL+
gCNmcrodrbr9LBV83bkAOVJrf82SPaBesV+ATY7ZiWpqvHTmcoS7nglM2XTr+uWR
Y9JGdpCE9U5QwTc6qfcn7Eqj7yNvvHMrT+1SHwsjAoGBALQyQEbhzYuOF7rV/26N
ci+z+0A39vNO++b5Se+tk0apZlPlgb2NK3LxxR+LHevFed9GRzdvbGk/F7Se3CyP
cxgswdazC6fwGjhX1mOYsG1oIU0V6X7f0FnaqWETrwf1M9yGEO78xzDfgozIazP0
s0fQeR9KXsZcuaotO3TIRxRRAoGAMFIDsLRvDKm1rkL0B0czm/hwwDMu/KDyr5/R
2M2OS1TB4PjmCgeUFOmyq3A63OWuStxtJboribOK8Qd1dXvWj/3NZtVY/z/j1P1E
Ceq6We0MOZa0Ae4kyi+p/kbAKPgv+VwSoc6cKailRHZPH7quLoJSIt0IgbfRnXC6
ygtcLNMCgYBwiPw2mTYvXDrAcO17NhK/r7IL7BEdFdx/w8vNJQp+Ub4OO3Iw6ARI
vXxu6A+Qp50jra3UUtnI+hIirMS+XEeWqJghK1js3ZR6wA/ZkYZw5X1RYuPexb/4
6befxmnEuGSbsgvGqYYTf5Z0vgsw4tAHfNS7TqSulYH06CjeG1F8DQ==
-----END RSA PRIVATE KEY-----
private_key_type rsa
serial_number 59:0b:af:a4:ca:40:db:29:b7:e8:4a:22:63:27:f7:3a:ce:54:78:8a
```
Vault has now generated a new set of credentials using the `example-dot-com`
role configuration. Here we see the dynamically generated private key and
certificate.
Using ACLs, it is possible to restrict using the pki backend such that trusted
operators can manage the role definitions, and both users and applications are
restricted in the credentials they are allowed to read.
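For instance, a minimal sketch of such a policy for applications (the policy
name is illustrative):

```shell-session
$ vault policy write pki-app - <<EOF
# Applications may only request certificates from the example-dot-com role.
path "pki/issue/example-dot-com" {
  capabilities = ["create", "update"]
}
EOF
```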
If you get stuck at any time, simply run `vault path-help pki`, or include a
subpath, for interactive help output.
## Tutorial
Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial.
Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
for more about how to use externally managed keys with PKI.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details. | vault | layout docs page title PKI Secrets Engines Quick Start Root CA Setup description The PKI secrets engine for Vault generates TLS certificates PKI secrets engine quick start root CA setup This document provides a brief overview of setting up a Vault PKI Secrets Engine with a Root CA certificate Mount the backend The first step to using the PKI backend is to mount it Unlike the kv backend the pki backend is not mounted by default shell session vault secrets enable pki Successfully mounted pki at pki Configure a CA certificate Next Vault must be configured with a CA certificate and associated private key We ll take advantage of the backend s self signed root generation support but Vault also supports generating an intermediate CA with a CSR for signing or setting a PEM encoded certificate and private key bundle directly into the backend Generally you ll want a root certificate to only be used to sign CA intermediate certificates but for this example we ll proceed as if you will issue certificates directly from the root As it s a root we ll want to set a long maximum life time for the certificate since it honors the maximum mount TTL first we adjust that shell session vault secrets tune max lease ttl 87600h pki Successfully tuned mount pki That sets the maximum TTL for secrets issued from the mount to 10 years Note that roles can further restrict the maximum TTL Now we generate our root certificate shell session vault write pki root generate internal common name myvault com ttl 87600h Key Value certificate BEGIN CERTIFICATE MIIDNTCCAh2gAwIBAgIUJqrw 9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx MjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN AQEBBQADggEPADCCAQoCggEBAKY vJ6sRFym yFYUneoVtDmOCaDKAQiGzQw0IXL wgMBBb82iKpYj5aQjXZGIl VkVnCi M2AQ iYXWZf1kTAdle4A6OC4 VefSIa2b4 eB7R8aiGTce62jB95 s5 YgrfIqk6igfpCSXYLE8ubNDA2 cqvjhku1UzlvKBX2 hIlgWkKlrsnybHN B 3Usw9Km 87rzoDR3OMxLV55YPHiq6 olIfSSwKAPjH8LZm uM1ITLG3WQUl8ARF17Dj wOKqbUG38PduVwKL5 qPksrvNwlmCP7Kmjncc6xnYp6 5lfr7V4DC UezrJYCIb0g SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD VR0PAQH BAQDAgEGMA8GA1UdEwEB wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM kxRZA2wR4f yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f yNhQUMBYG A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn 7mvD2 sr6lx4DW vJwVSW8eTuLtOLNu6 aFhcgTY OOB8q4n6iHuLrEt8 RV7RJI obRx74SfK9BcOLt4 DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF ZphN nNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI CJoBb P5Ahk6krc LZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C 1CDc9YL zjq 8nI2ooIrj4ZKZCOm2fKd1KeGN CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl 1V5 BT55jevSPVVu END CERTIFICATE expiration 1828121029 issuing ca BEGIN CERTIFICATE MIIDNTCCAh2gAwIBAgIUJqrw 9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx MjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN AQEBBQADggEPADCCAQoCggEBAKY vJ6sRFym yFYUneoVtDmOCaDKAQiGzQw0IXL wgMBBb82iKpYj5aQjXZGIl VkVnCi M2AQ iYXWZf1kTAdle4A6OC4 VefSIa2b4 eB7R8aiGTce62jB95 s5 YgrfIqk6igfpCSXYLE8ubNDA2 cqvjhku1UzlvKBX2 hIlgWkKlrsnybHN B 3Usw9Km 87rzoDR3OMxLV55YPHiq6 olIfSSwKAPjH8LZm uM1ITLG3WQUl8ARF17Dj wOKqbUG38PduVwKL5 qPksrvNwlmCP7Kmjncc6xnYp6 5lfr7V4DC UezrJYCIb0g SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD VR0PAQH BAQDAgEGMA8GA1UdEwEB wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM kxRZA2wR4f yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f yNhQUMBYG A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn 7mvD2 sr6lx4DW 
vJwVSW8eTuLtOLNu6 aFhcgTY OOB8q4n6iHuLrEt8 RV7RJI obRx74SfK9BcOLt4 DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF ZphN nNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI CJoBb P5Ahk6krc LZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C 1CDc9YL zjq 8nI2ooIrj4ZKZCOm2fKd1KeGN CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl 1V5 BT55jevSPVVu END CERTIFICATE serial number 26 aa f0 ff d1 03 65 ba 78 0c 4c 5a 2e 38 74 bd 20 07 c8 18 The returned certificate is purely informational it and its private key are safely stored in the backend mount Set URL configuration Generated certificates can have the CRL location and the location of the issuing certificate encoded These values must be set manually and typically to FQDN associated to the Vault server but can be changed at any time shell session vault write pki config urls issuing certificates http vault example com 8200 v1 pki ca crl distribution points http vault example com 8200 v1 pki crl Success Data written to pki ca urls Configure a role The next step is to configure a role A role is a logical name that maps to a policy used to generate those credentials For example let s create an example dot com role shell session vault write pki roles example dot com allowed domains example com allow subdomains true max ttl 72h Success Data written to pki roles example dot com Issue certificates By writing to the roles example dot com path we are defining the example dot com role To generate a new certificate we simply write to the issue endpoint with that role name Vault is now configured to create and manage certificates shell session vault write pki issue example dot com common name blah example com Key Value certificate BEGIN CERTIFICATE MIIDvzCCAqegAwIBAgIUWQuvpMpA2ym36EoiYyf3Os5UeIowDQYJKoZIhvcNAQEL BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyNDA1WhcNMTcx MjExMTkyNDM1WjAbMRkwFwYDVQQDExBibGFoLmV4YW1wbGUuY29tMIIBIjANBgkq hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1CU93lVgcLXGPxRGTRT3GM5wqytCo7Z6 gjfoHyKoPCAqjRdjsYgp1FMvumNQKjUat5KTtr2fypbOnAURDCh4bN omcj7eAqt ldJ8mf8CtKUaaJ1kp3R6RRFY u96BnmKUG8G7oDeEDsKlXuEuRcNbGlGF8DaM O1 HFa57cM 8yFB26Nj5wBoG5Om6ee5 W 14Qee8AB6OJbsf883Z zvhJTaB0QM4ZUq uAMoMVEutWhdI5EFm5OjtMeMu2U iJl2XqqgQ JmLRjRdMn1qd9TzTaVSnjoZ97s jHK444Px1m45einLqKUJ Ia2ljXYkkItJj9Ut6ZSAP9fHlAtX84W3QIDAQABo4H MIH8MA4GA1UdDwEB wQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH AwIwHQYDVR0OBBYEFH YdObW6T94U0zuU5hBfTfU5pt1MB8GA1UdIwQYMBaAFECK dYM4gDbMkxRZA2wR4f yNhQUMDsGCCsGAQUFBwEBBC8wLTArBggrBgEFBQcwAoYf aHR0cDovLzEyNy4wLjAuMTo4MjAwL3YxL3BraS9jYTAbBgNVHREEFDASghBibGFo LmV4YW1wbGUuY29tMDEGA1UdHwQqMCgwJqAkoCKGIGh0dHA6Ly8xMjcuMC4wLjE6 ODIwMC92MS9wa2kvY3JsMA0GCSqGSIb3DQEBCwUAA4IBAQCDXbHV68VayweB2tkb KDdCaveaTULjCeJUnm9UT 6C0YqC RxTAjdKFrilK49elOA3rAtEL6dmsDP2yH25 ptqi2iU y99HhZgu0zkS p8elYN3 l 0O7pOxayYXBkFf5t0TlEWSTb7cW Etz c MvSqx6vVvspSjB0PsA3eBq0caZnUJv2u TEiUe7PPY0UmrZxp R P kE54yI3nWN 4Cwto6yUwScOPbVR1d3hE2KU2toiVkEoOk17UyXWTokbG8rG0KLj99zu7my Fyre sjV5nWGDSMZODEsGxHOC JgNAC1z3n14 InFNOsHICnA5AnJzQdSQQjvcZHN2NyW t4f END CERTIFICATE issuing ca BEGIN CERTIFICATE MIIDNTCCAh2gAwIBAgIUJqrw 9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx MjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN AQEBBQADggEPADCCAQoCggEBAKY vJ6sRFym yFYUneoVtDmOCaDKAQiGzQw0IXL wgMBBb82iKpYj5aQjXZGIl VkVnCi M2AQ iYXWZf1kTAdle4A6OC4 VefSIa2b4 eB7R8aiGTce62jB95 s5 YgrfIqk6igfpCSXYLE8ubNDA2 cqvjhku1UzlvKBX2 hIlgWkKlrsnybHN B 3Usw9Km 87rzoDR3OMxLV55YPHiq6 olIfSSwKAPjH8LZm uM1ITLG3WQUl8ARF17Dj wOKqbUG38PduVwKL5 qPksrvNwlmCP7Kmjncc6xnYp6 
5lfr7V4DC UezrJYCIb0g SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD VR0PAQH BAQDAgEGMA8GA1UdEwEB wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM kxRZA2wR4f yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f yNhQUMBYG A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn 7mvD2 sr6lx4DW vJwVSW8eTuLtOLNu6 aFhcgTY OOB8q4n6iHuLrEt8 RV7RJI obRx74SfK9BcOLt4 DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF ZphN nNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI CJoBb P5Ahk6krc LZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C 1CDc9YL zjq 8nI2ooIrj4ZKZCOm2fKd1KeGN CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl 1V5 BT55jevSPVVu END CERTIFICATE private key BEGIN RSA PRIVATE KEY MIIEpAIBAAKCAQEA1CU93lVgcLXGPxRGTRT3GM5wqytCo7Z6gjfoHyKoPCAqjRdj sYgp1FMvumNQKjUat5KTtr2fypbOnAURDCh4bN omcj7eAqtldJ8mf8CtKUaaJ1k p3R6RRFY u96BnmKUG8G7oDeEDsKlXuEuRcNbGlGF8DaM O1HFa57cM 8yFB26Nj 5wBoG5Om6ee5 W 14Qee8AB6OJbsf883Z zvhJTaB0QM4ZUquAMoMVEutWhdI5EF m5OjtMeMu2U iJl2XqqgQ JmLRjRdMn1qd9TzTaVSnjoZ97sjHK444Px1m45einL qKUJ Ia2ljXYkkItJj9Ut6ZSAP9fHlAtX84W3QIDAQABAoIBAQCf5YIANfF gkNt YM6yRi hZJrU2I 1zPETxPW1vaFZR8y4hEoxCEDD8JCRm 9k w1TWoorvxgkEv r1HuDALYbNtwLd 71nCHYCKyH1b2uQpyl07qOAyASlb9r5oVjz4E6eobkd3N9fJA QN0EdK VarN968mLJsD3Hxb8chGdObBCQ LO zdqQLaz JwhfnK98rm6huQtYK3w ccd0OwoVmtZz2eJl11TJkB9fi4WqJyxl4wST7QC80LstB1deR78oDmN5WUKU12 G 4Mrgc1hRwUSm18HTTgAhaA4A3rjPyirBohb5Sf jJxusnnay7tvWeMnIiRI9mqCE dr3tLrcxAoGBAPL jHVUF6sxBqm6RTe8Ewg 8RrGmd69oB71QlVUrLYyC96E2s56 19dcyt5U2z F0u9wlwR1rMb2BJIXbxlNk i87IHmpOjCMS38SPZYWLHKj02eGfvA MjKKqEjNY md9eVAVZIWSEy63c4UcBK1qUH3 5PNlyjk53gCOI 4OXX AoGBAN A Alyd6A pyHWq8WMyAlV18LnzX8XktJ07xrNmjbPGD5sEHp Q9V33NitOZpu3bQL gCNmcrodrbr9LBV83bkAOVJrf82SPaBesV ATY7ZiWpqvHTmcoS7nglM2XTr uWR Y9JGdpCE9U5QwTc6qfcn7Eqj7yNvvHMrT 1SHwsjAoGBALQyQEbhzYuOF7rV 26N ci z 0A39vNO b5Se tk0apZlPlgb2NK3LxxR LHevFed9GRzdvbGk F7Se3CyP cxgswdazC6fwGjhX1mOYsG1oIU0V6X7f0FnaqWETrwf1M9yGEO78xzDfgozIazP0 s0fQeR9KXsZcuaotO3TIRxRRAoGAMFIDsLRvDKm1rkL0B0czm hwwDMu KDyr5 R 2M2OS1TB4PjmCgeUFOmyq3A63OWuStxtJboribOK8Qd1dXvWj 3NZtVY z j1P1E Ceq6We0MOZa0Ae4kyi p kbAKPgv VwSoc6cKailRHZPH7quLoJSIt0IgbfRnXC6 ygtcLNMCgYBwiPw2mTYvXDrAcO17NhK r7IL7BEdFdx w8vNJQp Ub4OO3Iw6ARI vXxu6A Qp50jra3UUtnI hIirMS XEeWqJghK1js3ZR6wA ZkYZw5X1RYuPexb 4 6befxmnEuGSbsgvGqYYTf5Z0vgsw4tAHfNS7TqSulYH06CjeG1F8DQ END RSA PRIVATE KEY private key type rsa serial number 59 0b af a4 ca 40 db 29 b7 e8 4a 22 63 27 f7 3a ce 54 78 8a Vault has now generated a new set of credentials using the example dot com role configuration Here we see the dynamically generated private key and certificate Using ACLs it is possible to restrict using the pki backend such that trusted operators can manage the role definitions and both users and applications are restricted in the credentials they are allowed to read If you get stuck at any time simply run vault path help pki or with a subpath for interactive help output Tutorial Refer to the Build Your Own Certificate Authority CA vault tutorials secrets management pki engine guide for a step by step tutorial Have a look at the PKI Secrets Engine with Managed Keys vault tutorials enterprise managed key pki for more about how to use externally managed keys with PKI API The PKI secrets engine has a full HTTP API Please see the PKI secrets engine API vault api docs secret pki for more details |
---
layout: docs
page_title: 'PKI - Secrets Engines: Quick Start: Intermediate CA Setup'
description: The PKI secrets engine for Vault generates TLS certificates.
---
# PKI secrets engine - quick start - intermediate CA setup
In the [first Quick Start guide](/vault/docs/secrets/pki/quick-start-root-ca),
certificates were issued directly from the root certificate authority.
As described in the example, this is not a recommended practice. This guide
builds on the previous guide's root certificate authority and creates an
intermediate authority using the root authority to sign the intermediate's
certificate.
#### Mount the backend
To add another certificate authority to our Vault instance, we have to mount it
at a different path.
```shell-session
$ vault secrets enable -path=pki_int pki
Successfully mounted 'pki' at 'pki_int'!
```
#### Configure an intermediate CA
```shell-session
$ vault secrets tune -max-lease-ttl=43800h pki_int
Successfully tuned mount 'pki_int'!
```
That tunes the maximum TTL for secrets issued from the mount to 5 years
(43800 hours). This value should be less than or equal to the maximum TTL of
the root certificate authority.
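To confirm the new limit, the mount tuning can be read back from the
`sys/mounts` endpoint (output abbreviated here and may vary by Vault version):

```shell-session
$ vault read sys/mounts/pki_int/tune
Key                  Value
---                  -----
default_lease_ttl    768h
max_lease_ttl        43800h
```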
Now, we generate our intermediate certificate signing request:
```shell-session
$ vault write pki_int/intermediate/generate/internal common_name="myvault.com Intermediate Authority" ttl=43800h
Key Value
csr -----BEGIN CERTIFICATE REQUEST-----
MIICsjCCAZoCAQAwLTErMCkGA1UEAxMibXl2YXVsdC5jb20gSW50ZXJtZWRpYXRl
IEF1dGhvcml0eTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJU1Qh8l
BW16WHAu34Fy92FnSy4219WVlKw1xwpKxjd95xH6WcxXozOs6oHFQ9c592bz51F8
KK3FFJYraUrGONI5Cz9qHbzC1mFCmjnXVXCoeNKIzEBG0Y+ehH7MQ1SvDCyvaJPX
ItFXaGf6zENiGsApw3Y3lFr0MjPzZDBH1p4Nq3aA6L2BaxvO5vczdQl5tE2ud/zs
GIdCWnl1ThDEeiX1Ppduos/dx3gaZa9ly3iCuDMKIL9yK5XTBTgKB6ALPApekLQB
kcUFbOuMzjrDSBe9ytu65yICYp26iAPPA8aKTj5cUgscgzEvQS66rSAVG/unrWxb
wbl8b7eQztCmp60CAwEAAaBAMD4GCSqGSIb3DQEJDjExMC8wLQYDVR0RBCYwJIIi
bXl2YXVsdC5jb20gSW50ZXJtZWRpYXRlIEF1dGhvcml0eTANBgkqhkiG9w0BAQsF
AAOCAQEAZA9A1QvTdAd45+Ay55FmKNWnis1zLjbmWNJURUoDei6i6SCJg0YGX1cZ
WkD0ibxPYihSsKRaIUwC2bE8cxZM57OSs7ISUmyPQAT2IHTHvuGK72qlFRBlFOzg
SHEG7gfyKdrALphyF8wM3u4gXhcnY3CdltjabL3YakZqd3Ey4870/0XXeo5c4k7w
/+n9M4xED4TnXYCGfLAlu5WWKSeCvu9mHXnJcLo1MiYjX7KGey/xYYbfxHSPm4ul
tI6Vf59zDRscfNmq37fERD3TiKP0QZNGTSRvnrxrx2RUQGXFywM8l4doG8nS5BxU
2jP20cdv0lJFvHr9663/8B/+F5L6Yw==
-----END CERTIFICATE REQUEST-----
```
Take the signing request from the intermediate authority and sign it using
another certificate authority, in this case the root certificate authority
generated in the first example.
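The signing step below reads the CSR from a file. One way to produce that
file, shown here as a sketch, is to pass `-field=csr` when generating the CSR
so that only the PEM blob is printed. Note that each call to
`generate/internal` creates a fresh key pair, so capture the CSR at generation
time rather than re-running the command later:

```shell-session
$ vault write -field=csr pki_int/intermediate/generate/internal \
    common_name="myvault.com Intermediate Authority" ttl=43800h > pki_int.csr
```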
```shell-session
$ vault write pki/root/sign-intermediate csr=@pki_int.csr format=pem_bundle ttl=43800h
Key Value
certificate -----BEGIN CERTIFICATE-----
MIIDZTCCAk2gAwIBAgIUENxQD7KIJi1zE/jEiYqAG1VC4NwwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMTI4MTcwNzIzWhcNMjIx
MTI3MTcwNzUzWjAtMSswKQYDVQQDEyJteXZhdWx0LmNvbSBJbnRlcm1lZGlhdGUg
QXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5seNV4Yd
uCMX0POUUuSzCBiR3Cyf9b9tGsCX7UfvZmjPs+Fl/X+Ovq6UtHM9RuTGlyfFrCWy
pflO7mc0H8PBzlvhv1WQet5aRyUOXkG6iYmooG9iobIY8z/TZCaCF605pgygfOaS
DIlwOdJkfiXxGpQ00pfIwe/Y2OK2I5e36u0E2EA6kXvcfexLjQGFPbod+H0R29Ro
/GwOJ6MpSHqB77mF025x1y08EtqT1z1kFCiDzFSkzNZEZYWljhDS6ZRY9ctzKufm
5CkUwmvCVRI2CivDJvmfhXyv0DRoq4IhYdJHo179RSObq3BY9f9LQ0balNLiM0Ft
O8f0urTqUAbySwIDAQABo4GTMIGQMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E
BTADAQH/MB0GA1UdDgQWBBSQgTfcMrKYzyckP6t/0iVQkl0ZBDAfBgNVHSMEGDAW
gBRccsCARqs3wQDjW7JMNXS6pWlFSDAtBgNVHREEJjAkgiJteXZhdWx0LmNvbSBJ
bnRlcm1lZGlhdGUgQXV0aG9yaXR5MA0GCSqGSIb3DQEBCwUAA4IBAQABNg2HxccY
DwRpsJ+sxA0BgDyF+tYtOlXViVNv6Z+nOU0nNhQSCjfzjYWmBg25nfKaFhQSC3b7
fIW+e7it/FLVrCgaqdysoxljqhR0gXMAy8S/ubmskPWjJiKauJB5bfB59Uf2GP6j
zimZDu6WjWvvgkKcJqJEbOOS9DWBvCTdmmml1NMXZtcytpod2Y7mxninqNRx3qpx
Pst4vgAbyM/3zLSzkyUD+MXIyRXwxktFlyEYBHvMd9OoHzLO6WLxk22FyQQ+w4by
NfXJY4r5pj6a4lJ6pPuqyfBhidYMTdY3AI7w/QRGk4qQv1iDmnZspk2AxdbR5Lwe
YmChIML/f++S
-----END CERTIFICATE-----
expiration 1669568873
issuing_ca -----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUdR44qhhyh3CZjnCtflGKQlTI8NswDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMTI4MTYxODA2WhcNMjcx
MTI2MTYxODM1WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBANTPnQ2CUkuLrYT4V6/IIK/gWFZXFG4lWTmgM5Zh
PDquMhLEikZCbZKbupouBI8MOr5i8tycENaTnSs9dBwVEOWAHbLkliVgvCKgLi0F
PfPM87FnBoKVctO2ip8AdmYcAt/wc096dWBG6eKLVP5xsAe7NcYDtF/inHgEZ22q
ZjGVEyC6WntIASgULoHGgHakPp1AHLhGm8nL5YbusWY7RgZIlNeGWLVoneG0pxdV
7W1SPO67dsQyq58mTxMIGVUj5YE1q7/C6OhCTnAHc+sRm0oUehPfO8kY4NHpCJGv
nDRdJi6k6ewk94c0KK2tUUM/TN6ZSRfx6ccgfPH8zNcVPVcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFFxywIBGqzfB
AONbskw1dLqlaUVIMB8GA1UdIwQYMBaAFFxywIBGqzfBAONbskw1dLqlaUVIMBYG
A1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQBgvsgpBuVR
iKVdXXpFyoQLImuoaHZgaj5tuUDqnMoxOA1XWW6SVlZmGfDQ7+u5NBkp2cGSDRGm
ARHJTeURvdZIwdFdGkNqfAZjutRjjQOnXgS65ujZd7AnlZq1v0ZOZqVVk9YEOhOe
Rh2MjnHGNuiLBib1YNQHNuRef1mPwIE2Gm/Tz/z3JPHtkKNIKbn60zHrIIM/OT2Z
HYjcMUcqXtKGYfNjVspJm3lSDUoyJdaq80Afmy2Ez1Vt9crGG3Dj8mgs59lEhEyo
MDVhOP116M5HJfQlRPVd29qS8pFrjBvXKjJSnJNG1UFdrWBJRJ3QrBxUQALKrJlR
g5lvTeymHjS/
-----END CERTIFICATE-----
serial_number 10:dc:50:0f:b2:88:26:2d:73:13:f8:c4:89:8a:80:1b:55:42:e0:dc
```
Now set the intermediate certificate authority's signing certificate to the
root-signed certificate, first saving the `certificate` value from the signing
step above to a file such as `signed_certificate.pem`.
```shell-session
$ vault write pki_int/intermediate/set-signed certificate=@signed_certificate.pem
Success! Data written to: pki_int/intermediate/set-signed
```
The intermediate certificate authority is now configured and ready to issue
certificates.
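As a quick sanity check, you can read the mount's CA certificate back; it
should now be the root-signed intermediate:

```shell-session
$ vault read -field=certificate pki_int/cert/ca
```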
#### Set URL configuration
Generated certificates can have the CRL location and the location of the
issuing certificate encoded. These values must be set manually, but can be
changed at any time.
```shell-session
$ vault write pki_int/config/urls issuing_certificates="http://127.0.0.1:8200/v1/pki_int/ca" crl_distribution_points="http://127.0.0.1:8200/v1/pki_int/crl"
Success! Data written to: pki_int/config/urls
```
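The stored values can be read back at any time to confirm the configuration
(output shape may vary by Vault version):

```shell-session
$ vault read pki_int/config/urls
Key                        Value
---                        -----
crl_distribution_points    [http://127.0.0.1:8200/v1/pki_int/crl]
issuing_certificates       [http://127.0.0.1:8200/v1/pki_int/ca]
```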
#### Configure a role
The next step is to configure a role. A role is a logical name that maps to a
policy used to generate those credentials. For example, let's create an
"example-dot-com" role:
```shell-session
$ vault write pki_int/roles/example-dot-com \
allowed_domains=example.com \
allow_subdomains=true max_ttl=72h
Success! Data written to: pki_int/roles/example-dot-com
```
#### Issue certificates
By writing to the `roles/example-dot-com` path we are defining the
`example-dot-com` role. To generate a new certificate, we simply write
to the `issue` endpoint with that role name. Vault is now configured to create
and manage certificates!
```shell-session
$ vault write pki_int/issue/example-dot-com \
common_name=blah.example.com
Key Value
--- -----
certificate -----BEGIN CERTIFICATE-----
MIIDbDCCAlSgAwIBAgIUPiAyxq+nIE6xlWf7hrzLkPQxtvMwDQYJKoZIhvcNAQEL
BQAwMzExMC8GA1UEAxMoVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3ViIEF1
dGhvcml0eTAeFw0xNjA5MjcwMDA5MTNaFw0xNjA5MjcwMTA5NDNaMBsxGTAXBgNV
BAMTEGJsYWguZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQDJAYB04IVdmSC/TimaA6BbXlvgBTZHL5wBUTmO4iHhenL0eDEXVe2Fd7Yq
75LiBJmcC96hKbqh5rwS8KwN9ElZI52/mSMC+IvoNlYHAf7shwfsjrVx3q7/bTFg
lz6wECn1ugysxynmMvgQD/pliRkxTQ7RMh4Qlh75YG3R9BHy9ZddklZp0aNaitts
0uufHnN1UER/wxBCZdWTUu34KDL9I6yE7Br0slKKHPdEsGlFcMkbZhvjslZ7DGvO
974S0qtOdKiawJZbpNPg0foGZ3AxesDUlkHmmgzUNes/sjknDYTHEfeXM6Uap0j6
XvyhCxqdeahb/Vtibg0z9I0IusJbAgMBAAGjgY8wgYwwDgYDVR0PAQH/BAQDAgOo
MB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQU/5oy0rL7
TT0wX7KZK7qcXqgayNwwHwYDVR0jBBgwFoAUgM37P8oXmA972ztLfw+b1eIY5now
GwYDVR0RBBQwEoIQYmxhaC5leGFtcGxlLmNvbTANBgkqhkiG9w0BAQsFAAOCAQEA
CT2vI6/taeLTw6ZulUhLXEXYXWZu1gF8n2COjZzbZXmHxQAoZ3GtnSNwacPHAyIj
f3cA9Moo552y39LUtWk+wgFtQokWGK7LXglLaveNUBowOHq/xk0waiIinJcgTG53
Z/qnbJnTjAOG7JwVJplWUIiS1avCksrHt7heE2EGRGJALqyLZ119+PW6ogtCLUv1
X8RCTw/UkIF/LT+sLF0bXWy4Hn38Gjwj1MVv1l76cEGOVSHyrYkN+6AMnAP58L5+
IWE9tN3oac4x7jhbuNpfxazIJ8Q6l/Up5U5Evfbh6N1DI0/gFCP20fMBkHwkuLfZ
2ekZoSeCgFRDlHGkr7Vv9w==
-----END CERTIFICATE-----
issuing_ca -----BEGIN CERTIFICATE-----
MIIDijCCAnKgAwIBAgIUB28DoGwgGFKL7fbOu9S4FalHLn0wDQYJKoZIhvcNAQEL
BQAwLzEtMCsGA1UEAxMkVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgQXV0aG9y
aXR5MB4XDTE2MDkyNzAwMDgyMVoXDTI2MDkxNjE2MDg1MVowMzExMC8GA1UEAxMo
VmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3ViIEF1dGhvcml0eTCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBAOSCiSij4wy1wiMwvZt+rtU3IaO6ZTn9
LfIPuGsR5/QSJk37pCZQco1LgoE/rTl+/xu3bDovyHDmgObghC6rzVOX2Tpi7kD+
DOZpqxOsaS8ebYgxB/XJTSxyEJuSAcpSNLqqAiZivuQXdaD0N7H3Or0awwmKE9mD
I0g8CF4fPDmuuOG0ASn9fMqXVVt5tXtEqZ9yJYfNOXx3FOPjRVOZf+kvSc31wCKe
i/KmR0AQOmToKMzq988nLqFPTi9KZB8sEU20cGFeTQFol+m3FTcIru94EPD+nLUn
xtlLELVspYb/PP3VpvRj9b+DY8FGJ5nfSJl7Rkje+CD4VxJpSadin3kCAwEAAaOB
mTCBljAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU
gM37P8oXmA972ztLfw+b1eIY5nowHwYDVR0jBBgwFoAUj4YAIxRwrBy0QMRKLnD0
kVidIuYwMwYDVR0RBCwwKoIoVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3Vi
IEF1dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAQEAA4buJuPNJvA1kiATLw1dVU2J
HPubk2Kp26Mg+GwLn7Vz45Ub133JCYfF3/zXLFZZ5Yub9gWTtjScrvNfQTAbNGdQ
BdnUlMmIRmfB7bfckhryR2R9byumeHATgNKZF7h8liNHI7X8tTzZGs6wPdXOLlzR
TlM3m1RNK8pbSPOkfPb06w9cBRlD8OAbNtJmuypXA6tYyiiMYBhP0QLAO3i4m1ns
aAjAgEjtkB1rQxW5DxoTArZ0asiIdmIcIGmsVxfDQIjFlRxAkafMs74v+5U5gbBX
wsOledU0fLl8KLq8W3OXqJwhGLK65fscrP0/omPAcFgzXf+L4VUADM4XhW6Xyg==
-----END CERTIFICATE-----
ca_chain [-----BEGIN CERTIFICATE-----
MIIDijCCAnKgAwIBAgIUB28DoGwgGFKL7fbOu9S4FalHLn0wDQYJKoZIhvcNAQEL
BQAwLzEtMCsGA1UEAxMkVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgQXV0aG9y
aXR5MB4XDTE2MDkyNzAwMDgyMVoXDTI2MDkxNjE2MDg1MVowMzExMC8GA1UEAxMo
VmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3ViIEF1dGhvcml0eTCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBAOSCiSij4wy1wiMwvZt+rtU3IaO6ZTn9
LfIPuGsR5/QSJk37pCZQco1LgoE/rTl+/xu3bDovyHDmgObghC6rzVOX2Tpi7kD+
DOZpqxOsaS8ebYgxB/XJTSxyEJuSAcpSNLqqAiZivuQXdaD0N7H3Or0awwmKE9mD
I0g8CF4fPDmuuOG0ASn9fMqXVVt5tXtEqZ9yJYfNOXx3FOPjRVOZf+kvSc31wCKe
i/KmR0AQOmToKMzq988nLqFPTi9KZB8sEU20cGFeTQFol+m3FTcIru94EPD+nLUn
xtlLELVspYb/PP3VpvRj9b+DY8FGJ5nfSJl7Rkje+CD4VxJpSadin3kCAwEAAaOB
mTCBljAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU
gM37P8oXmA972ztLfw+b1eIY5nowHwYDVR0jBBgwFoAUj4YAIxRwrBy0QMRKLnD0
kVidIuYwMwYDVR0RBCwwKoIoVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3Vi
IEF1dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAQEAA4buJuPNJvA1kiATLw1dVU2J
HPubk2Kp26Mg+GwLn7Vz45Ub133JCYfF3/zXLFZZ5Yub9gWTtjScrvNfQTAbNGdQ
BdnUlMmIRmfB7bfckhryR2R9byumeHATgNKZF7h8liNHI7X8tTzZGs6wPdXOLlzR
TlM3m1RNK8pbSPOkfPb06w9cBRlD8OAbNtJmuypXA6tYyiiMYBhP0QLAO3i4m1ns
aAjAgEjtkB1rQxW5DxoTArZ0asiIdmIcIGmsVxfDQIjFlRxAkafMs74v+5U5gbBX
wsOledU0fLl8KLq8W3OXqJwhGLK65fscrP0/omPAcFgzXf+L4VUADM4XhW6Xyg==
-----END CERTIFICATE-----]
private_key -----BEGIN RSA PRIVATE KEY-----
MIIEpgIBAAKCAQEAyQGAdOCFXZkgv04pmgOgW15b4AU2Ry+cAVE5juIh4Xpy9Hgx
F1XthXe2Ku+S4gSZnAveoSm6oea8EvCsDfRJWSOdv5kjAviL6DZWBwH+7IcH7I61
cd6u/20xYJc+sBAp9boMrMcp5jL4EA/6ZYkZMU0O0TIeEJYe+WBt0fQR8vWXXZJW
adGjWorbbNLrnx5zdVBEf8MQQmXVk1Lt+Cgy/SOshOwa9LJSihz3RLBpRXDJG2Yb
47JWewxrzve+EtKrTnSomsCWW6TT4NH6BmdwMXrA1JZB5poM1DXrP7I5Jw2ExxH3
lzOlGqdI+l78oQsanXmoW/1bYm4NM/SNCLrCWwIDAQABAoIBAQCCbHMJY1Wl8eIJ
v5HG2WuHXaaHqVoavo2fXTDXwWryfx1v+zz/Q0YnQBH3shPAi/OQCTOfpw/uVWTb
dUZul3+wUyfcVmUdXGCLgBY53dWna8Z8e+zHwhISsqtDXV/TpelUBDCNO324XIIR
Cg0TLO4nyzQ+ESLo6D+Y2DTp8lBjMEkmKTd8CLXR2ycEoVykN98qPZm8keiLGO91
I8K7aRd8uOyQ6HUfJRlzFHSuwaLReErxGTEPI4t/wVqh2nP2gGBsn3apiJ0ul6Jz
NlYO5PqiwpeDk4ibhQBpicnm1jnEcynH/WtGuKgMNB0M4SBRBsEguO7WoKx3o+qZ
iVIaPWDhAoGBAO05UBvyJpAcz/ZNQlaF0EAOhoxNQ3h6+6ZYUE52PgZ/DHftyJPI
Y+JJNclY91wn91Yk3ROrDi8gqhzA+2Lelxo1kuZDu+m+bpzhVUdJia7tZDNzRIhI
24eP2GdochooOZ0qjvrik4kuX43amBhQ4RHsBjmX5CnUlL5ZULs8v2xnAoGBANjq
VLAwiIIqJZEC6BuBvVYKaRWkBCAXvQ3j/OqxHRYu3P68PZ58Q7HrhrCuyQHTph2v
fzfmEMPbSCrFIrrMRmjUG8wopL7GjZjFl8HOBHFwzFiz+CT5DEC+IJIRkp4HM8F/
PAzjB2wCdRdSjLTD5ph0/xQIg5xfln7D+wqU0QHtAoGBAKkLF0/ivaIiNftw0J3x
WxXag/yErlizYpIGCqvuzII6lLr9YdoViT/eJYrmb9Zm0HS9biCu2zuwDijRSBIL
RieyF40opUaKoi3+0JMtDwTtO2MCd8qaCH3QfkgqAG0tTuj1Q8/6F2JA/myKYamq
MMhhpYny9+7rAlemM8ZJIqtvAoGBAKOI3zpKDNCdd98A4v7B7H2usZUIJ7gOTZDo
XqiNyRENWb2PK6GNq/e6SrxvuclvyKA+zFnXULJoYtsj7tAH69lieGaOCc5uoRgZ
eBU7/euMj/McE6vEO3GgJawaJYCQi3uJMjvA+bp7i81+hehOfU5ZfmmbFaZSBoMh
u+U5Vu3tAoGBANnBIbHfD3E7rqnqdpH1oRRHLA1VdghzEKgyUTPHNDzPJG87RY3c
rRqeXepblud3qFjD60xS9BzcBijOvZ4+KHk6VIMpkyqoeNVFCJbBVCw+JGMp88+v
e9t+2iwryh5+rnq+pg6anmgwHldptJc1XEFZA2UUQ89RP7kOGQF6IkIS
-----END RSA PRIVATE KEY-----
private_key_type rsa
serial_number 3e:20:32:c6:af:a7:20:4e:b1:95:67:fb:86:bc:cb:90:f4:31:b6:f3
```
Vault has now generated a new set of credentials using the `example-dot-com`
role configuration. Here we see the dynamically generated private key and
certificate. The issuing CA certificate and CA trust chain are returned as well.
The `ca_chain` field returns all the intermediate authorities in the trust chain. The root
authority is not included, since that will usually be trusted by the underlying
OS.
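To check the chain outside of Vault, save the leaf `certificate`, the
`ca_chain`, and your root CA certificate to files (the file names below are
hypothetical) and verify with OpenSSL, passing the intermediates as untrusted
certificates:

```shell-session
$ openssl verify -CAfile root_ca.pem -untrusted ca_chain.pem blah.example.com.pem
blah.example.com.pem: OK
```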
## Tutorial
Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial.
Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
for more about how to use externally managed keys with PKI.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details.
---
layout: docs
page_title: Certificate Issuance External Policy (CIEPS) | PKI - Secrets Engines
description: An overview of the Certificate Issuance External Policy (CIEPS) protocol
---
# PKI secrets engine - Certificate Issuance External Policy Service (CIEPS) <EnterpriseAlert inline="true" />
This document covers high-level architecture and service APIs used by the
Vault PKI Secrets Engine when communicating with the Certificate Issuance
External Policy Service (CIEPS) <EnterpriseAlert inline="true" />.
## What is Certificate Issuance External Policy Service (CIEPS)?
HashiCorp Vault's PKI Secrets Engine has a mechanism for issuing leaf
certificates with arbitrary structure: [`/pki/sign-verbatim`](/vault/api-docs/secret/pki#sign-verbatim).
This requires an organization to run an application/user-accessible service
for authenticating, authorizing, and validating certificate issuance requests
(potentially handling key pair generation as well), before asking PKI to sign
the resulting CSR and leaf certificate with its own highly-privileged Vault
token. If any attribute is missing from the original requester's CSR, the
original service must reject the request as `sign-verbatim` does not give the
controlling service the ability to modify the request.
The Certificate Issuance External Policy Service (CIEPS) <EnterpriseAlert inline="true" />
protocol solves this by placing validation and certificate templating
behind the PKI secrets engine, providing:
1. **Auditing**, so the original requester is still identified and both the
original request and subsequent response are tracked.
2. **Central access**, so applications only need to use a new URL for
requesting certificates.
3. **Certificate modification**, so customization of the requester's
submission can be exposed to this external service.
4. **External validation**, when compared to the Role-based system, as the
CIEPS implementation can reach out to customer-defined external systems
for validation.
Either of these two mechanisms allows an organization to leverage the Vault
PKI Secrets Engine to build its own flexible issuance control architecture,
using Vault as a PKI-as-a-Service platform. However, CIEPS grants far
greater control to the organization than the `sign-verbatim` approach.
### Custom policy with `sign-verbatim`
With `sign-verbatim`, the policy validation service must sit in front of
Vault, processing requests from users (who cannot use Vault
authentication and need to authenticate separately to this
service). This registration authority (RA) service then handles its own authentication to Vault,
which provides the signing capabilities via the PKI plugin.
When the application retains control over its own key material by providing
a CSR, the policy service cannot modify the requested CSR and thus cannot
modify the resulting certificate. It can only approve or deny requests
without allowing operators to hide implementation details from calling
applications. This is because PKI's `sign-verbatim` endpoint lacks the
ability for the Vault API caller (in this case, the fronting policy service)
to modify the certificate independent of the provided CSR.
If, however, the policy service can control key material (and this is an
acceptable risk to the organization), the policy service could modify requests
on behalf of the calling application. However, this still requires the
external application to know how to authenticate to this external policy
service.
Additionally, to ensure compatibility with Vault, this policy service (and its
developers) would need to add support for the ACME protocol. For any new
protocols Vault supports in the future, this service would also need to
implement support to retain compatibility.
### Custom policy with Certificate Issuance External Policy Service (CIEPS)
With CIEPS, users still authenticate to Vault and use the normal request
workflow to sign and issue certificates, including via ACME. However,
Vault's PKI Engine reaches out to the configured CIEPS implementation to
validate and template the requested certificate, transparently to the
calling application.
Notably, the application can opt to either retain full control over its key
material or delegate key creation to the trusted Vault service, with no impact
on the functionality CIEPS can provide. The CIEPS service can be scoped to
respond to requests from either a single PKI mount or multiple, getting
information about the requesting user and the Vault PKI instance from the
CIEPS messages.
Because the CIEPS service only needs knowledge of validating requests and
templating the final certificate structure, its developers need only be
concerned with the business policy logic and not broader PKI concerns (such
as generating key material or re-implementing support for other issuance
protocols).
## Certificate Issuance External Policy Service (CIEPS) webhook format
The CIEPS protocol is a REST-based, optionally mTLS protected webhook. The
external service configuration specifies the single URL that Vault will POST
the formatted CIEPS request to. When the CIEPS service is unavailable (either
due to misconfiguration or outage), Vault will reject the request and it is
up to the client to retry the request at a later time.
For convenience, Go versions of these structs are available [from the Vault
SDK](https://github.com/hashicorp/vault/blob/main/sdk/helper/certutil/cieps.go).
### Vault to CIEPS request format
This document outlines CIEPS request/response version 1.
Using the `application/json` content type, Vault will post the following
request body as a JSON object (an abbreviated example follows the field list):
- `request_version` `(int: 1)` - The version of the CIEPS request sent by
Vault; a compatible response format is expected.
- `request_uuid` `(string)` - A random UUID which serves to identify this
request. This value must be sent in the response.
- `synchronous` `(bool: true)` - A boolean indicating whether the request is
synchronous or not. Presently set to true; no asynchronous response is
understood.
- `user_request_key_values` `(map[string]interface{})` - The unvalidated
request parameters sent by the user. It is up to the CIEPS service to
validate these prior to using them. The following fields may be present,
including any other fields submitted by the user:
- `csr` `(string)` - A PEM format CSR submitted either by the client (
in the case of `/sign` or ACME requests) or on the client's behalf
(in the case of `/issue` requests, where key material is generated
by Vault).
- `identity_request_key_values` `(map[string]interface{})` - Values related
to the user's identity. When the request type is ACME, this value is not
populated. These are:
- `entity_id` `(string)` - The entity identifier from the request after
authentication.
- `entity` `(map[string]interface{})` - The entire resolved `logical.Entity`
of the user after authentication; subject to change by the
`entity_jmespath` parameter in the configuration.
- `groups` `([]map[string]interface{})` - The set of resolved
`logical.Groups` of the user after authentication; subject to change by
the `group_jmespath` parameter in the configuration.
~> **Note**: in the event that the direct token backend or a root token is
used, entity information may not exist. In either case,
`identity_request_key_values` will be omitted.
- `acme_request_key_values` `(map[string]interface{})` - Values related to
the ACME authorizations and challenges attached to the finished order. Only present
when the request type is ACME. These are:
- `authorizations` `(map[string]interface{})` - Authorizations and
challenges solved by the client to move this order to the finalization
state.
- `account` `(map[string]interface{})` - Information related to the ACME
account issuing the request. These are:
- `id` `(string)` - The UUID of the ACME account.
- `directory` `(string)` - The path to the ACME directory requested by
this account.
- `contact` `([]string)` - Unverified contact information submitted by
the requesting ACME account on creation.
- `created_date` `(string: RFC 3339 format)` - Timestamp when the account
was created.
- `eab` `(map[string]interface{}, optional)` - When present, the details
of the EAB used to authorize this account via Vault authentication. If
not present, this ACME account was created without EAB bindings.
- `key_id` `(string)` - Identifier of the EAB binding used by this
account.
- `key_type` `(string)` - Key type of the EAB binding used by this
account.
- `created_date` `(string: RFC 3339 format)` - Timestamp when the
account was created.
- `vault_request_values` `(map[string]interface{})` - Request values validated
or created by Vault. These have a higher level of trust than the unvalidated
`user_request_key_values`. These are:
- `policy_name` `(string: "")` - The optional policy name specified by the
requester. When the issuance mode is not ACME (or if it was ACME and EAB
was enforced), this has been validated by Vault's ACL system.
- `mount` `(string)` - The request's mount path as known by the PKI plugin.
- `namespace` `(string)` - The request's namespace the mount path exists
within as known by the PKI plugin.
- `vault_is_performance_standby` `(bool)` - Asserted when this requesting
node is a standby node. When the service indicates storage is required in
its response, Vault will forward the user's HTTP request up to an active
node, requiring it to re-submit the CIEPS request. In this case, if the
service knows it must always store certificates and sees a request from
a standby node, it can skip policy and template evaluation or cache the
results for a second pass.
- `vault_is_performance_secondary` `(bool)` - Asserted when this requesting
node is from a performance secondary versus the primary cluster.
- `issuance_mode` `(string: "sign", "issue", "ica", or "acme")` - The type
of the request: whether a REST call to `/external-policy/sign(/:policy)`,
to `/external-policy/issue(/:policy)`, `/external-policy/sign-intermediate(/:policy)`,
or an ACME request, respectively.
- `vault_generated_private_key` `(bool)` - Whether or not Vault generated
the key material behind this request. Set to true when
`issuance_mode="issue"` only presently.
- `requested_issuer_name` `(string)` - Name of the user's requested issuer;
can be changed by modifying the response `issuer_ref` value.
- `requested_issuer_id` `(string)` - UUID of the user's requested issuer;
can be changed by modifying the response `issuer_ref` value.
- `requested_issuer_cert` `(string)` - PEM format certificate of the user's
requested issuer; can be changed by modifying the response `issuer_ref`
value.
- `requested_issuance_config` `(map[string]interface{})` - Configuration
used for leaf certificate issuance. These are:
- `aia_values` `(map[string]interface{})` - AIA values (CA, CRL, and
OCSP) for the suggested issuer. These may differ from the actual values
used for issuance of this request if `issuer_ref` is set on the response.
- `leaf_not_after_behavior` `(string: "err", "truncate", or "permit")` - leaf
validity period behavior for the suggested issuer.
- `mount_default_ttl` `(string)` - Suggested default TTL set on mount tuning.
- `mount_max_ttl` `(string)` - Suggested maximum TTL set on the mount tuning.
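For illustration only, an abbreviated request body for a synchronous `/sign`
call might look like the following; every value is hypothetical and many
optional fields are elided:

```json
{
  "request_version": 1,
  "request_uuid": "3f0e2c1a-5b7d-4c9e-8a21-0d6f4e5a9b10",
  "synchronous": true,
  "user_request_key_values": {
    "csr": "-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----",
    "common_name": "blah.example.com"
  },
  "identity_request_key_values": {
    "entity_id": "d3790a22-...",
    "entity": {},
    "groups": []
  },
  "vault_request_values": {
    "policy_name": "",
    "mount": "pki",
    "namespace": "",
    "vault_is_performance_standby": false,
    "vault_is_performance_secondary": false,
    "issuance_mode": "sign",
    "vault_generated_private_key": false,
    "requested_issuer_name": "default"
  }
}
```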
### CIEPS to Vault response format
The CIEPS service must reply to this POST request with a `200 OK` status,
regardless of whether a certificate should be issued or not. Redirects will
not be followed by Vault; any proxy or load balancing functionality should be
strictly transparent to the caller. Any message body returned with a non-200
status code will not be surfaced, either in Vault server logs or to the user.
In the response to the above request, only one of the `certificate` or `error`
fields should be specified. In the event both `certificate` and `error` are
present, the `error` will be appended to the returned `warnings` and the
`certificate` will be issued.
Using the `application/json` content type, the server should reply with the
following JSON object (a sketch follows the field list):
- `request_uuid` `(string)` - The random UUID which the server used to
identify this request.
- `error` `(string, optional)` - The error message to be returned to the
user about why their request failed. Only one of the `error` or
`certificate` response parameters should be specified.
- `warnings` `([]string, optional)` - Optional warnings to be returned to the user
about minor issues with their request.
- `certificate` `(string, optional)` - A PEM format certificate to be signed
by the Vault service. Only one of the `error` or `certificate` response
parameters should be specified.
- `issuer_ref` `(string)` - The issuer reference to use to sign this request.
If the user's requested issuer (in `requested_issuer_id`) is acceptable, this
field must be set to that value.
- `store_certificate` `(bool: false)` - Whether or not to store the signed
certificate.
- `generate_lease` `(bool: false)` - Whether or not Vault should generate an
associated lease for the certificate. Note that to generate a lease,
`store_certificate` also needs to be set to `true`, otherwise no lease
will be generated.
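Putting these fields together, a minimal successful response is sketched
below; the certificate is elided and the UUID must echo the one from the
request:

```json
{
  "request_uuid": "3f0e2c1a-5b7d-4c9e-8a21-0d6f4e5a9b10",
  "certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
  "issuer_ref": "default",
  "store_certificate": true,
  "generate_lease": false
}
```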
The certificate's signature will be ignored and replaced by a signature created
by the specified issuer. If a signature algorithm compatible with this issuer
is specified on the certificate, it will be preserved; otherwise, the default
signature algorithm for this issuer's key type will be used.
The certificate's AIA information will be replaced by the information from the
specified issuer, if present, else the global AIA URLs will be set, replacing
the AIA URIs and CRL distribution point extensions. Additionally, the
Authority Key Identifier extension will be replaced by the issuer's Subject
Key Identifier extension value as mandated by RFC 5280.
## Tutorial
Refer to the following tutorials for PKI secrets engine usage examples:
- [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
- [Build Certificate Authority (CA) in Vault with an offline Root](/vault/tutorials/secrets-management/pki-engine-external-ca)
- [Enable ACME with PKI secrets engine](/vault/tutorials/secrets-management/pki-acme-caddy)
- [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
- [PKI Unified CRL and OCSP With Cross Cluster
Revocation](/vault/tutorials/secrets-management/pki-unified-crl-ocsp-cross-cluster)
- [Configure Vault as a Certificate Manager in Kubernetes with
Helm](/vault/tutorials/kubernetes/kubernetes-cert-manager)
- [Generate mTLS Certificates for Nomad using
Vault](/vault/tutorials/secrets-management/vault-pki-nomad)
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details.
---
layout: docs
page_title: 'PKI - Secrets Engines: Considerations'
description: The PKI secrets engine for Vault generates TLS certificates.
---
# PKI secrets engine - considerations
To successfully deploy this secrets engine, there are a number of important
considerations to be aware of, as well as some preparatory steps that should be
undertaken. You should read all of these _before_ using this secrets engine or
generating the CA to use with this secrets engine.
## Table of contents
- [Be Careful with Root CAs](#be-careful-with-root-cas)
- [Managed Keys](#managed-keys)
- [One CA Certificate, One Secrets Engine](#one-ca-certificate-one-secrets-engine)
- [Always Configure a Default Issuer](#always-configure-a-default-issuer)
- [Key Types Matter](#key-types-matter)
- [Cluster Performance and Key Types](#cluster-performance-and-key-types)
- [Use a CA Hierarchy](#use-a-ca-hierarchy)
- [Cross-Signed Intermediates](#cross-signed-intermediates)
- [Cluster URLs are Important](#cluster-urls-are-important)
- [Automate Rotation with ACME](#automate-rotation-with-acme)
- [ACME Stores Certificates](#acme-stores-certificates)
- [ACME Role Restrictions Require EAB](#acme-role-restrictions-require-eab)
- [ACME and the Public Internet](#acme-and-the-public-internet)
- [ACME Errors are in Server Logs](#acme-errors-are-in-server-logs)
- [ACME Security Considerations](#acme-security-considerations)
- [ACME and Client Counting](#acme-and-client-counting)
- [Keep Certificate Lifetimes Short, For CRL's Sake](#keep-certificate-lifetimes-short-for-crls-sake)
- [NotAfter Behavior on Leaf Certificates](#notafter-behavior-on-leaf-certificates)
- [Cluster Performance and Quantity of Leaf Certificates](#cluster-performance-and-quantity-of-leaf-certificates)
- [You must configure issuing/CRL/OCSP information _in advance_](#you-must-configure-issuingcrlocsp-information-_in-advance_)
- [Distribution of CRLs and OCSP](#distribution-of-crls-and-ocsp)
- [Automate CRL Building and Tidying](#automate-crl-building-and-tidying)
- [Spectrum of Revocation Support](#spectrum-of-revocation-support)
- [What Are Cross-Cluster CRLs?](#what-are-cross-cluster-crls)
- [Issuer Subjects and CRLs](#issuer-subjects-and-crls)
- [Automate Leaf Certificate Renewal](#automate-leaf-certificate-renewal)
- [Safe Minimums](#safe-minimums)
- [Token Lifetimes and Revocation](#token-lifetimes-and-revocation)
- [Safe Usage of Roles](#safe-usage-of-roles)
- [Telemetry](#telemetry)
- [Auditing](#auditing)
- [Role-Based Access](#role-based-access)
- [Replicated DataSets](#replicated-datasets)
- [Cluster Scalability](#cluster-scalability)
- [PSS Support](#pss-support)
- [Issuer Storage Migration Issues](#issuer-storage-migration-issues)
- [Issuer Constraints Enforcement](#issuer-constraints-enforcement)
- [Tutorial](#tutorial)
- [API](#api)
## Be careful with root CAs
Vault storage is secure, but not as secure as a piece of paper in a bank vault.
It is, after all, networked software. If your root CA is hosted outside of
Vault, don't put it in Vault as well; instead, issue a shorter-lived
intermediate CA certificate and put this into Vault. This aligns with industry
best practices.
Since 0.4, the secrets engine supports generating self-signed root CAs and
creating and signing CSRs for intermediate CAs. In each instance, for security
reasons, the private key can _only_ be exported at generation time, and the
ability to do so is part of the command path (so it can be put into ACL
policies).
If you plan on using intermediate CAs with Vault, it is suggested that you let
Vault create CSRs and do not export the private key, then sign those with your
root CA (which may be a second mount of the `pki` secrets engine).
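As a sketch of this flow (the mount names `pki_root` and `pki_int` are
illustrative, not required):

```shell-session
$ vault write -field=csr pki_int/intermediate/generate/internal \
    common_name="example.com Intermediate CA" > pki_int.csr

$ vault write -field=certificate pki_root/root/sign-intermediate \
    csr=@pki_int.csr format=pem_bundle ttl=43800h > pki_int.pem

$ vault write pki_int/intermediate/set-signed certificate=@pki_int.pem
```

With this approach, the intermediate's private key never leaves the
`pki_int` mount.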
### Managed keys
Since 1.10, Vault Enterprise can access private key material in a
[_managed key_](/vault/docs/enterprise/managed-keys). In this case, Vault never sees the
private key, and the external KMS or HSM performs certificate signing operations.
Managed keys are configured by selecting the `kms` type when generating a root
or intermediate.
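As a hedged sketch on Vault Enterprise (the managed key name `my-hsm-key` is
a placeholder for a key you have already configured under `sys/managed-keys`):

```shell-session
$ vault write pki/root/generate/kms \
    managed_key_name=my-hsm-key \
    common_name="example.com Root CA"
```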
## One CA certificate, one secrets engine
Since Vault 1.11.0, the PKI Secrets Engine supports multiple issuers in a single
mount. However, in order to simplify the configuration, it is _strongly_
recommended that operators limit a mount to a single issuer. If you want to issue
certificates from multiple disparate CAs, mount the PKI secrets engine at multiple
mount points with separate CA certificates in each.
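For example (mount paths and TTLs here are illustrative):

```shell-session
$ vault secrets enable -path=pki_root pki
$ vault secrets tune -max-lease-ttl=87600h pki_root

$ vault secrets enable -path=pki_int pki
$ vault secrets tune -max-lease-ttl=43800h pki_int
```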
The rationale for separating mounts is to simplify permissions management:
very few individuals need access to perform operations with the root, but
many need access to create leaves. The operations on a root should generally
be limited to issuing and revoking intermediate CAs, which is a highly
privileged operation; it becomes much easier to audit these operations when
they're in a separate mount than if they're mixed in with day-to-day leaf
issuance.
A common pattern is to have one mount act as your root CA and to use this CA
only to sign intermediate CA CSRs from other PKI secrets engines.
To keep old CAs active, there are two approaches to achieving rotation:
1. Use multiple secrets engines. This allows a fresh start, preserving the
old issuer and CRL. Vault ACL policy can be updated to deny new issuance
under the old mount point and roles can be re-evaluated before being
imported into the new mount point.
2. Use multiple issuers in the same mount point. The usage of the old issuer
can be restricted to CRL signing, and existing roles and ACL policy can be
kept as-is. This allows cross-signing within the same mount, and consumers
of the mount won't have to update their configuration. Once the transitional
   period for this rotation has completed and all previously issued certificates have
expired, it is encouraged to fully remove the old issuer and any unnecessary
cross-signed issuers from the mount point.
Another suggested use case for multiple issuers in the same mount is splitting
issuance by TTL lifetime. For short-lived certificates, an intermediate
stored in Vault will often out-perform a HSM-backed intermediate. For
longer-lived certificates, however, it is often important to have the
intermediate key material secured throughout the lifetime of the end-entity
certificate. This means that two intermediates in the same mount -- one backed
by the HSM and one backed by Vault -- can satisfy both use cases. Operators
can make roles setting maximum TTLs for each issuer and consumers of the
mount can decide which to use.
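A minimal sketch of this split (the issuer names `hsm-int` and `vault-int`
are placeholders for issuers already present in the mount):

```shell-session
$ vault write pki/roles/long-lived \
    issuer_ref=hsm-int allowed_domains=example.com \
    allow_subdomains=true max_ttl=8760h

$ vault write pki/roles/short-lived \
    issuer_ref=vault-int allowed_domains=example.com \
    allow_subdomains=true max_ttl=720h
```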
### Always configure a default issuer
For backwards compatibility, [the default issuer](/vault/api-docs/secret/pki#read-issuers-configuration)
is used to service PKI endpoints without an explicit issuer (either via path
selection or role-based selection). When certificates are revoked and their
issuer is no longer part of this PKI mount, Vault places them on the default
issuer's CRL. This means maintaining a default issuer is important for both
backwards compatibility for issuing certificates and for ensuring revoked
certificates land on a CRL.
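For example, to inspect and explicitly set the default issuer (the issuer
name `my-intermediate` is a placeholder):

```shell-session
$ vault read pki/config/issuers
$ vault write pki/config/issuers default=my-intermediate
```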
## Key types matter
Certain key types have impacts on performance. Signing certificates from a RSA
key will be slower than issuing from an ECDSA or Ed25519 key. Key generation
(using `/issue/:role` endpoints) using RSA keys will also be slow: RSA key
generation involves finding suitable random primes, whereas Ed25519 keys can
be random data. As the number of bits goes up (RSA 2048 -> 4096 or ECDSA
P-256 -> P-521), signature times also increase.
This matters in both directions: not only is issuance more expensive,
but validation of the corresponding signature (in say, TLS handshakes) will
also be more expensive. Careful consideration of both issuer and issued key
types can have meaningful impacts on performance of not only Vault, but
systems using these certificates.
### Cluster performance and key types
The [benchmark-vault](https://github.com/hashicorp/vault-benchmark) project
can be used to measure the performance of a Vault PKI instance. In general,
some considerations to be aware of:
- RSA key generation is much slower, and more variable, than EC key
generation. If performance and throughput are a necessity, consider using
EC keys (including NIST P-curves and Ed25519) instead of RSA.
- Certificate signing requests (via `/pki/sign`) will be faster than
  certificate issuance requests (via `/pki/issue`), especially for RSA keys:
  signing removes the need for Vault to generate key material, instead
  signing key material provided by the client. The signing step is common
  to both endpoints, so key generation is pure overhead if the client has
  a sufficiently secure source of entropy; see the example after this list.
- The CA's key type matters as well: an RSA CA will result in an RSA
  signature, which takes longer to create than a signature from an ECDSA or Ed25519 CA.
- Storage is an important factor: with [BYOC Revocation](/vault/api-docs/secret/pki#revoke-certificate),
using `no_store=true` still gives you the ability to revoke certificates
  and audit logs can be used to track issuance. Clusters using remote
  storage (like Consul) over a slow network with `no_store=false`, or with
  `no_store_cert_metadata=false` and metadata specified on issuance, will
  see additional latency on issuance. Adding leases for every issued
  certificate compounds the problem.
- Storing too many certificates results in longer `LIST /pki/certs` time,
including the time to tidy the instance. As such, for large scale
deployments (>= 250k active certificates) it is recommended to use audit
logs to track certificates outside of Vault.
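As a sketch of the `/pki/sign` approach mentioned above (the mount path and
role name are placeholders), the client generates its own P-256 key locally
and only submits a CSR to Vault:

```shell-session
$ openssl req -new -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
    -keyout leaf.key -out leaf.csr -subj "/CN=service.example.com"

$ vault write pki/sign/example-role csr=@leaf.csr ttl=720h
```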
As a general comparison on unspecified hardware, using `benchmark-vault` for
`30s` on a local, single node, raft-backed Vault instance:
- Vault can issue 300k certificates using EC P-256 for CA & leaf keys and
without storage.
- But switching to storing these leaves drops that number to 65k, and only
20k with leases.
- Using large, expensive RSA-4096 bit keys, Vault can only issue 160 leaves,
  regardless of whether or not storage or leases were used. The 95th
  percentile key generation time is above 10s.
- In comparison, using P-521 keys, Vault can issue closer to 30k leaves
without leases and 18k with leases.
These numbers are for example only, to represent the impact different key types
can have on PKI cluster performance.
The use of ACME adds additional latency into these numbers, both because
certificates need to be stored and because challenge validation needs to
be performed.
## Use a CA hierarchy
It is generally recommended to use a hierarchical CA setup, with a root
certificate which issues one or more intermediates (based on usage), which
in turn issue the leaf certificates.
This allows stronger storage or policy guarantees around [protection of the
root CA](#be-careful-with-root-cas), while letting Vault manage the
intermediate CAs and issuance of leaves. Different intermediates might be
issued for different usage, such as VPN signing, Email signing, or testing
versus production TLS services. This helps to keep CRLs limited to specific
purposes: for example, VPN services don't care about the revoked set of
email signing certificates if they're using separate certificates and
different intermediates, and thus don't need both CRL contents. Additionally,
this allows higher risk intermediates (such as those issuing longer-lived
email signing certificates) to have HSM-backing without impacting the
performance of easier-to-rotate intermediates and certificates (such as
TLS intermediates).
Vault supports the use of both the [`allowed_domains` parameter on
Roles](/vault/api-docs/secret/pki#allowed_domains) and the [`permitted_dns_domains`
parameter to set the Name Constraints extension](/vault/api-docs/secret/pki#permitted_dns_domains)
on root and intermediate generation. This allows for several layers of
separation of concerns between TLS-based services.
### Cross-Signed intermediates
When cross-signing intermediates from two separate roots, two separate
intermediate issuers will exist within the Vault PKI mount. In order to
correctly serve the cross-signed chain on issuance requests, the
`manual_chain` override is required on either or both intermediates. This
can be constructed in the following order:
- this issuer (`self`)
- this root
- the other copy of this intermediate
- the other root
All requests to this issuer for signing will now present the full cross-signed
chain.
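A hedged sketch using `vault patch` (the issuer names `int-a`, `root-a`,
`int-b`, and `root-b` are placeholders for the two copies of the
intermediate and their respective roots):

```shell-session
$ vault patch pki/issuer/int-a \
    manual_chain=self,root-a,int-b,root-b
```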
## Cluster URLs are important
In Vault 1.13, support for [templated AIA
URLs](/vault/api-docs/secret/pki#enable_aia_url_templating-1)
was added. With the [per-cluster URL
configuration](/vault/api-docs/secret/pki#set-cluster-configuration) pointing
to this Performance Replication cluster, AIA information will point to the
cluster that issued this certificate automatically.
In Vault 1.14, with ACME support, the same configuration is used for allowing
ACME clients to discover the URL of this cluster.
~> **Warning**: It is important to ensure that this configuration is
up to date and maintained correctly, always pointing to the node's
   PR cluster address (which may be a Load Balanced or a DNS Round-Robin
address). If this configuration is not set on every Performance Replication
cluster, certificate issuance (via REST and/or via ACME) will fail.
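For example, on each cluster, pointing at that cluster's own address (the
hostname below is a placeholder):

```shell-session
$ vault write pki/config/cluster \
    path=https://vault-cluster-a.example.com:8200/v1/pki \
    aia_path=https://vault-cluster-a.example.com:8200/v1/pki
```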
## Automate rotation with ACME
In Vault 1.14, support for the [Automatic Certificate Management Environment
(ACME)](https://datatracker.ietf.org/doc/html/rfc8555) protocol has been
added to the PKI Engine. This is a standardized way to handle validation,
issuance, rotation, and revocation of server certificates.
Many ecosystems, from web servers like Caddy, Nginx, and Apache, to
orchestration environments like Kubernetes (via cert-manager) natively
support issuance via the ACME protocol. For deployments without native
support, stand-alone tools like certbot support fetching and renewing
certificates on behalf of consumers. Vault's PKI Engine only includes server
support for ACME; no client functionality has been included.
~> Note: Vault's PKI ACME server caps the certificate's validity at 90 days
   maximum by default, overridable using the ACME config `max_ttl` parameter.
Shorter validity durations can be set via limiting the role's TTL to
be under the global ACME configured limit.
Aligning with Let's Encrypt, we do not support the optional `NotBefore`
and `NotAfter` order request parameters.
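As a sketch of enabling ACME on an existing mount (the mount path `pki` is a
placeholder; the [cluster configuration](#cluster-urls-are-important) must
already be set):

```shell-session
$ vault secrets tune \
    -passthrough-request-headers=If-Modified-Since \
    -allowed-response-headers=Last-Modified \
    -allowed-response-headers=Location \
    -allowed-response-headers=Replay-Nonce \
    -allowed-response-headers=Link \
    pki

$ vault write pki/config/acme enabled=true
```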
### ACME stores certificates
Because ACME requires stored certificates in order to function, the notes
[below about automating tidy](#automate-crl-building-and-tidying) are
especially important for the long-term health of the PKI cluster. ACME also
introduces additional resource types (accounts, orders, authorizations, and
challenges) that must be tidied via [the `tidy_acme=true`
option](/vault/api-docs/secret/pki#tidy). Orders, authorizations, and
challenges are [cleaned up based on the
`safety_buffer`](/vault/api-docs/secret/pki#safety_buffer)
parameter, but accounts can live longer past their last issued certificate
by controlling the [`acme_account_safety_buffer`
parameter](/vault/api-docs/secret/pki#acme_account_safety_buffer).
As a consequence of the above, and like the discussions in the [Cluster
Scalability](#cluster-scalability) section, because these roles have
`no_store=false` set, ACME can only issue certificates on the active nodes
of PR clusters; standby nodes, if contacted, will transparently forward
all requests to the active node.
### ACME role restrictions require EAB
Because ACME by default has no external authorization engine and is
unauthenticated from a Vault perspective, the use of roles with ACME
in the default configuration is of limited value, as any ACME client
can request certificates under any role by proving possession of the
requested certificate identifiers.
To solve this issue, there are two possible approaches:
1. Use a restrictive [`allowed_roles`, `allowed_issuers`, and
`default_directory_policy` ACME
configuration](/vault/api-docs/secret/pki#set-acme-configuration)
to let only a single role and issuer be used. This prevents user
choice, allowing some global restrictions to be placed on issuance
and avoids requiring ACME clients to have (at initial setup) access
   to a Vault token or other mechanism for acquiring a Vault EAB ACME token.
2. Use a more permissive [configuration with
`eab_policy=always-required`](/vault/api-docs/secret/pki#eab_policy)
to allow more roles and users to select the roles, but bind ACME clients
to a Vault token which can be suitably ACL'd to particular sets of
approved ACME directories.
The choice of approach depends on the policies of the organization wishing
to use ACME.
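As a sketch of the second approach (the mount path is a placeholder),
enforce EAB and then mint a binding token for a client:

```shell-session
$ vault write pki/config/acme enabled=true eab_policy=always-required
$ vault write -f pki/acme/new-eab
```

The returned `id` and `key` are then configured in the ACME client as its
external account binding credentials.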
Another consequence of the unauthenticated nature of ACME requests is that
role templating based on entity information cannot be used: there is no
token, and thus no entity, associated with the request, even when EAB
binding is used.
### ACME and the public internet
Using ACME is possible over the public internet; public CAs like Let's Encrypt
offer this as a service. Similarly, organizations running internal PKI
infrastructure might wish to issue server certificates to pieces of
infrastructure outside of their internal network boundaries, from a publicly
accessible Vault instance. By default, without enforcing a restrictive
`eab_policy`, this results in a complicated threat model: _any_ external
client which can prove possession of a domain can issue a certificate under
this CA, which might be considered more trusted by this organization.
As such, we strongly recommend that operators of publicly facing Vault
instances (such as HCP Vault) require a [restrictive
`eab_policy=always-required` configuration](/vault/api-docs/secret/pki#eab_policy).
System administrators of Vault instances can enforce this by [setting the
`VAULT_DISABLE_PUBLIC_ACME=true` environment
variable](/vault/api-docs/secret/pki#acme-external-account-bindings).
### ACME errors are in server logs
Because the ACME client is not necessarily trusted (as account registration
may not be tied to a valid Vault token when EAB is not used), many error
messages end up in the Vault server logs out of security necessity. When
troubleshooting issues with clients requesting certificates, first check
the client's logs, if any, (e.g., certbot will state the log location on
errors), and then correlate with Vault server logs to identify the failure
reason.
### ACME security considerations
ACME allows any client to use Vault to make some sort of external call;
while the design of ACME attempts to minimize this scope and will prohibit
issuance if incorrect servers are contacted, it cannot account for all
possible remote server implementations. Vault's ACME server makes three
types of requests:
1. DNS requests for `_acme-challenge.<domain>`, which should be least
invasive and most safe.
2. TLS ALPN requests for the `acme-tls/1` protocol, which should be
safely handled by the TLS before any application code is invoked.
3. HTTP requests to `http://<domain>/.well-known/acme-challenge/<token>`,
which could be problematic based on server design; if all requests,
regardless of path, are treated the same and assumed to be trusted,
this could result in Vault being used to make (invalid) requests.
Ideally, any such server implementations should be updated to ignore
such ACME validation requests or to block access originating from Vault
to this service.
In all cases, no information about the response presented by the remote
server is returned to the ACME client.
When running Vault on multiple networks, note that Vault's ACME server
places no restrictions on requesting client/destination identifier
validation paths; a client could use an HTTP challenge to force Vault to
reach out to a server on a network it could otherwise not access.
### ACME and client counting
In Vault 1.14, ACME contributes differently to usage metrics than other
interactions with the PKI Secrets Engine. Due to its use of unauthenticated
requests (which do not generate Vault tokens), it would not be counted in
the traditional [activity log APIs](/vault/api-docs/system/internal-counters#activity-export).
Instead, certificates issued via ACME will be counted via their unique
certificate identifiers (the combination of CN, DNS SANs, and IP SANs).
These will create a stable identifier that will be consistent across
renewals, other ACME clients, mounts, and namespaces, contributing to
the activity log presently as a non-entity token attributed to the first
mount which created that request.
## Keep certificate lifetimes short, for CRL's sake
This secrets engine aligns with Vault's philosophy of short-lived secrets. As
such it is not expected that CRLs will grow large; the only place a private key
is ever returned is to the requesting client (this secrets engine does _not_
store generated private keys, except for CA certificates). In most cases, if the
key is lost, the certificate can simply be ignored, as it will expire shortly.
If a certificate must truly be revoked, the normal Vault revocation function can
be used, and any revocation action will cause the CRL to be regenerated. When
the CRL is regenerated, any expired certificates are removed from the CRL (and
any revoked, expired certificates are removed from secrets engine storage). This
is an expensive operation! Due to the structure of the CRL standard, Vault must
read **all** revoked certificates into memory in order to rebuild the CRL and
clients must fetch the regenerated CRL.
This secrets engine does not support multiple CRL endpoints with sliding date
windows; often such mechanisms will have the transition point a few days apart,
but this gets into the expected realm of the actual certificate validity periods
issued from this secrets engine. A good rule of thumb for this secrets engine
would be to simply not issue certificates with a validity period greater than
your maximum comfortable CRL lifetime. Alternately, you can control CRL caching
behavior on the client to ensure that checks happen more often.
Often multiple endpoints are used in case a single CRL endpoint is down so that
clients don't have to figure out what to do with a lack of response. Run Vault
in HA mode, and the CRL endpoint should be available even if a particular node
is down.
~> Note: Since Vault 1.11.0, with multiple issuers in the same mount point,
different issuers may have different CRLs (depending on subject and key
material). This means that Vault may need to regenerate multiple CRLs.
This is again a rationale for keeping TTLs short and avoiding revocation
if possible.
~> Note: Since Vault 1.12.0, we support two complementary revocation
mechanisms: Delta CRLs, which allow for rebuilds of smaller, incremental
additions to the last complete CRL, and OCSP, which allows responding to
revocation status requests for individual certificates. When coupled with
the new CRL auto-rebuild functionality, this means that the revoking step
isn't as costly (as the CRL isn't always rebuilt on each revocation),
outside of storage considerations. However, while the rebuild operation
still can be expensive with lots of certificates, it will be done on a
schedule rather than on demand.
### NotAfter behavior on leaf certificates
In Vault 1.11.0, the PKI Secrets Engine has introduced a new
`leaf_not_after_behavior` [parameter on
issuers](/vault/api-docs/secret/pki#leaf_not_after_behavior).
This allows modification of the issuance behavior: should Vault `err`,
preventing issuance of a longer-lived leaf cert than issuer, silently
`truncate` to that of the issuer's `NotAfter` value, or `permit` longer
expirations.
It is strongly suggested to use `err` or `truncate` for intermediates;
`permit` is only useful for root certificates, as intermediates' NotAfter
expirations are checked when validating presented chains.
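For example, to opt an issuer into truncation (the issuer name is a
placeholder):

```shell-session
$ vault patch pki/issuer/my-intermediate leaf_not_after_behavior=truncate
```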
In combination with a cascading expiration with longer lived roots (perhaps
on the range of 2-10 years), shorter lived intermediates (perhaps on the
range of 6 months to 2 years), and short-lived leaf certificates (on the
range of 30 to 90 days), and the [rotation strategies discussed in other
sections](/vault/docs/secrets/pki/rotation-primitives), this should keep the
CRLs adequately small.
### Cluster performance and quantity of leaf certificates
As mentioned above, keeping TTLs short (or using `no_store=true` and
`no_store_cert_metadata=true`) and avoiding
leases is important for a healthy cluster. However, it is important to note
this is a scale problem: 10-1000 long-lived, stored certificates are probably
fine, but 50k-100k become a problem and 500k+ stored, unexpired certificates
can negatively impact even large Vault clusters--even with short TTLs!
However, once these certificates are expired, a [tidy operation](/vault/api-docs/secret/pki#tidy)
will clean up CRLs and Vault cluster storage.
Note that organizational risk assessments for certificate compromise might
mean certain certificate types should always be issued with `no_store=false`;
even short-lived broad wildcard certificates (say, `*.example.com`) might be
important enough to have precise control over revocation. However, an internal
service with a well-scoped certificate (say, `service.example.com`) might be
of low enough risk to issue a 90-day TTL with `no_store=true`, preventing
the need for revocation in the unlikely case of compromise.
Having a shorter TTL decreases the likelihood of needing to revoke a cert
(but cannot prevent it entirely) and decreases the impact of any such
compromise.
~> Note: As of Vault 1.12, the PKI Secret Engine's [Bring-Your-Own-Cert
(BYOC)](/vault/api-docs/secret/pki#revoke-certificate)
functionality allows revocation of certificates not previously stored
(e.g., issued via a role with `no_store=true`). This means that setting
`no_store=true` _is now_ safe to be used globally, regardless of importance
of issued certificates (and their likelihood for revocation).
## You must configure issuing/CRL/OCSP information _in advance_
This secrets engine serves CRLs from a predictable location, but it is not
possible for the secrets engine to know where it is running. Therefore, you must
configure desired URLs for the issuing certificate, CRL distribution points, and
OCSP servers manually using the `config/urls` endpoint. It is supported to have
more than one of each of these by passing in the multiple URLs as a
comma-separated string parameter.
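For example (the hostname is a placeholder for your cluster's address):

```shell-session
$ vault write pki/config/urls \
    issuing_certificates="https://vault.example.com/v1/pki/ca" \
    crl_distribution_points="https://vault.example.com/v1/pki/crl" \
    ocsp_servers="https://vault.example.com/v1/pki/ocsp"
```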
~> Note: when using Vault Enterprise's Performance Replication features with a
PKI Secrets Engine mount, each cluster will have its own CRL; this means
each cluster's unique CRL address should be included in the [AIA
information](https://datatracker.ietf.org/doc/html/rfc5280#section-5.2.7)
field separately, or the CRLs should be consolidated and served outside of
Vault.
~> Note: When using multiple issuers in the same mount, it is suggested to use
the per-issuer AIA fields rather than the global (`/config/urls`) variant.
This is for correctness: these fields are used for chain building and
automatic CRL detection in certain applications. If they point to the wrong
issuer's information, these applications may break.
## Distribution of CRLs and OCSP
Both CRLs and OCSP allow interrogating revocation status of certificates. Both
of these methods include internal security and authenticity (both CRLs and
OCSP responses are signed by the issuing CA within Vault). This means both are
fine to distribute over non-secure and non-authenticated channels, such as
HTTP.
~> Note: The OCSP implementation for GET requests can lead to intermittent
400 errors when an encoded OCSP request contains consecutive '/' characters.
Until this is resolved it is recommended to use POST based OCSP requests.
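For example, OpenSSL's `ocsp` command issues POST-based requests (the
hostname and file names below are placeholders):

```shell-session
$ openssl ocsp -issuer ca.pem -cert leaf.pem \
    -url https://vault.example.com/v1/pki/ocsp -resp_text
```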
## Automate CRL building and tidying
Since Vault 1.12, the PKI Secrets Engine supports automated CRL rebuilding
(including optional Delta CRLs which can be built more frequently than
complete CRLs) via the `/config/crl` endpoint. Additionally, tidying of
revoked and expired certificates can be configured automatically via the
`/config/auto-tidy` endpoint. Both of these should be enabled to ensure
compatibility with the wider PKIX ecosystem and performance of the cluster.
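A hedged sketch of enabling both (parameter values are illustrative starting
points, not recommendations for every deployment):

```shell-session
$ vault write pki/config/crl auto_rebuild=true enable_delta=true

$ vault write pki/config/auto-tidy \
    enabled=true tidy_cert_store=true tidy_revoked_certs=true \
    tidy_acme=true safety_buffer=72h
```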
## Spectrum of revocation support
Starting with Vault 1.13, the PKI secrets engine has the ability to support a
spectrum of cluster sizes and certificate revocation quantities.
For users with few revocations or who want a unified view and have the
inter-cluster bandwidth to support it, we recommend turning on auto
rebuilding of CRLs, cross-cluster revocation queues, and cross-cluster CRLs.
This allows all consumers of the CRLs to have the most accurate picture of
revocations, regardless of which cluster they talk to.
If the unified CRL becomes too big for the underlying storage mechanism or
for a single host to build, we recommend relying on OCSP instead of CRLs.
These have much smaller storage entries, and the CRL `disabled` flag is
independent of `unified_crls`, allowing unified OCSP to remain.
However, when cross-cluster traffic becomes too high (or if CRLs are still
necessary in addition to OCSP), we recommend sharding the CRL between
different clusters. This has been the default behavior of Vault, but with
the introduction of per-cluster, templated AIA information, the leaf
certificate's Authority Information Access (AIA) info will point directly
to the cluster which issued it, allowing the correct CRL for this cert to
be identified by the application. This more correctly mimics the behavior
of [Let's Encrypt's CRL sharding](https://letsencrypt.org/2022/09/07/new-life-for-crls.html).
This sharding behavior can also be used for OCSP, if the cross-cluster
traffic for revocation entries becomes too high.
For users who wish to manage revocation manually, using the audit logs to
track certificate issuance would allow an external system to identify which
certificates were issued. These can be manually tracked for revocation, and
a [custom CRL can be built](/vault/api-docs/secret/pki#combine-crls-from-the-same-issuer)
using externally tracked revocations. This would allow usage of roles set to
`no_store=true`, so Vault is strictly used as an issuing authority and isn't
storing any certificates, issued or revoked. For the highest of revocation
volumes, this could be the best option.
Notably, this last approach can either be used for the creation of externally
stored unified or sharded CRLs. If a single external unified CRL becomes
unreasonably large, each cluster's certificates could have AIA info point
to an externally stored and maintained, sharded CRL. However,
Vault has no mechanism to sign OCSP requests at this time.
### What are Cross-Cluster CRLs?
Vault Enterprise supports a clustering mode called [Performance
Replication](/vault/docs/enterprise/replication#performance-replication). In
a replicated PKI Secrets Engine mount, issuer and role information is synced
between the Performance Primary and all Performance Secondary clusters.
However, each Performance Secondary cluster has its own local storage of
issued certificates and revocations which is not synced. In Vault versions
before 1.13, this meant that each of these clusters had its own CRL and
OCSP data, and any revocation requests needed to be processed on the
cluster that issued it (or BYOC used).
Starting with Vault 1.13, we've added [two
features](/vault/api-docs/secret/pki#read-crl-configuration) to Vault
Enterprise to help manage this setup more correctly and easily: revocation
request queues (`cross_cluster_revocation=true` in `config/crl`) and unified
revocation entries (`unified_crl=true` in `config/crl`).
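A sketch of enabling both features on a Vault Enterprise mount (the mount
path is a placeholder):

```shell-session
$ vault write pki/config/crl \
    cross_cluster_revocation=true unified_crl=true \
    unified_crl_on_existing_paths=true
```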
The former allows operators (revoking by serial number) to request a
certificate be revoked regardless of which cluster it was issued on. For
example, if a request goes into the Performance Primary, but it didn't
issue the certificate, it'll write a cross-cluster revocation request,
and mark the results as pending. If another cluster already has this
certificate in storage, it will revoke it and confirm the revocation back
to the main cluster. An operator can [list pending
revocations](/vault/api-docs/secret/pki#list-revocation-requests) to see
the status of these requests. To clean up invalid requests (e.g., if the
cluster which had that certificate disappeared, if that certificate was
issued with `no_store=true` on the role, or if it was an invalid serial
number), an operator can [use tidy](/vault/api-docs/secret/pki#tidy) with
`tidy_revocation_queue=true`, optionally shortening
`revocation_queue_safety_buffer` to remove them quicker.
The latter allows all clusters to have a unified view of revocations,
that is, to have access to a list of revocations performed by other clusters.
While the configuration parameter includes `crl` in the description, this
applies to [both CRLs](/vault/api-docs/secret/pki#read-issuer-crl) and the
[OCSP responder](/vault/api-docs/secret/pki#ocsp-request). When this
revocation replication occurs, if any cluster considers a cert revoked when
another doesn't (e.g., via BYOC revocation of a `no_store=false` certificate),
all clusters will now consider it revoked assuming it hasn't expired. Notably,
the active node of the primary cluster will be used to rebuild the CRL; as
this can grow large if many clusters have lots of revoked certs, an operator
might need to disable CRL building (`disabled=true` in `config/crl`) or
increase the [storage size](/vault/docs/configuration/storage/raft#max_entry_size).
As an aside, all new cross-cluster writes (from Performance Secondary up to
the Performance Primary) are performed synchronously. This gives the caller
confidence that the request actually went through, at the expense of incurring
a bit higher overhead for revoking certificates. When a node loses its GRPC
connection (e.g., during leadership election or being otherwise unable to
contact the active primary), errors will occur, though the local portion of the
write (if any) will still succeed. For cross-cluster revocation requests, due
to there being no local write, this means that the operation will need to be
retried, but in the event of an issue writing a cross-cluster revocation entry
when the cert existed locally, the revocation will eventually be synced across
clusters when the connection comes back.
## Issuer subjects and CRLs
As noted on several [GitHub issues](https://github.com/hashicorp/vault/issues/10176),
Go's x509 library has an opinionated parsing and structuring mechanism for
certificate's Subjects. Issuers created within Vault are fine, but when using
externally created CA certificates, these may not be parsed
correctly throughout all parts of the PKI. In particular, CRLs embed a
(modified) copy of the issuer name. This can be avoided by using OCSP to
track revocation, but note that performance characteristics are different
between OCSP and CRLs.
~> Note: As of Go 1.20 and Vault 1.13, Go correctly formats the CRL's issuer
name and this notice [does not apply](https://github.com/golang/go/commit/a367981b4c8e3ae955eca9cc597d9622201155f3).
## Automate leaf certificate renewal
To manage certificates for services at scale, it is best to automate the
certificate renewal as much as possible. Vault Agent [has support for
automatically renewing requested certificates](/vault/docs/agent-and-proxy/agent/template#certificates)
based on the `validTo` field. Other solutions might involve using
[cert-manager](https://cert-manager.io/) in Kubernetes or OpenShift, backed
by the Vault CA.
## Safe minimums
Since its inception, this secrets engine has enforced SHA256 for signature
hashes rather than SHA1. As of 0.5.1, a minimum of 2048 bits for RSA keys is
also enforced. Software that can handle SHA256 signatures should also be able to
handle 2048-bit keys, and 1024-bit keys are considered unsafe and are disallowed
in the Internet PKI.
## Token lifetimes and revocation
When a token expires, it revokes all leases associated with it. This means that
long-lived CA certs need correspondingly long-lived tokens, something that is
easy to forget. Starting with 0.6, root and intermediate CA certs no longer have
associated leases, to prevent unintended revocation when not using a token with
a long enough lifetime. To revoke these certificates, use the `pki/revoke`
endpoint.
## Safe usage of roles
The Vault PKI Secrets Engine supports many options to limit issuance via
[Roles](/vault/api-docs/secret/pki#create-update-role).
Careful consideration of construction is necessary to ensure that more
permissions are not given than necessary. Additionally, roles should generally
do _one_ thing; multiple roles should be preferable over having too permissive
roles that allow arbitrary issuance (e.g., `allow_any_name` should generally
be used sparingly, if at all). A sample role reflecting these recommendations
is shown after the list below.
- `allow_any_name` should generally be set to `false`; this is the default.
- `allow_localhost` should generally be set to `false` for production
services, unless listening on `localhost` is expected.
- Unless necessary, `allow_wildcard_certificates` should generally be set to
`false`. This is **not** the default due to backwards compatibility
concerns.
- This is especially necessary when `allow_subdomains` or `allow_glob_domains`
are enabled.
- `enforce_hostnames` should generally be enabled for TLS services; this is
the default.
- `allow_ip_sans` should generally be set to `false` (but defaults to `true`),
unless IP address certificates are explicitly required.
- When using short TTLs (< 30 days) or with high issuance volume, it is
  generally recommended to set `no_store` to `true` (defaults to `false`).
This prevents serial number based revocation, but allows higher throughput
as Vault no longer needs to store every issued certificate. This is discussed
more in the [Replicated Datasets](#replicated-datasets) section below.
- Do not use roles with root certificates (`issuer_ref`). Root certificates
should generally only issue intermediates (see the section on [CA hierarchy
above](#use-a-ca-hierarchy)), which doesn't rely on roles.
- Limit `key_usage` and `ext_key_usage`; don't attempt to allow all usages
for all purposes. Generally the default values are useful for client and
server TLS authentication.
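As a sketch of a role reflecting the recommendations above (the mount path,
role name, and issuer name are placeholders):

```shell-session
$ vault write pki/roles/service-example-com \
    issuer_ref=my-intermediate \
    allowed_domains=example.com \
    allow_subdomains=true \
    allow_wildcard_certificates=false \
    allow_localhost=false \
    allow_ip_sans=false \
    enforce_hostnames=true \
    max_ttl=720h \
    no_store=true
```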
## Telemetry
Beyond Vault's default telemetry around request processing, PKI exposes count and
duration metrics for the issue, sign, sign-verbatim, and revoke calls. The
metrics keys take the form `mount-path,operation,[failure]` with labels for
namespace and role name.
Note that these metrics are per-node and thus would need to be aggregated across
nodes and clusters.
## Auditing
Because Vault HMACs audit string keys by default, it is necessary to tune
PKI secrets mounts to get an accurate view of the issuance occurring under
this mount; an example tune command is shown at the end of this section.
~> Note: Depending on usage of Vault, CRLs (and rarely, CA chains) can grow to
be rather large. We don't recommend un-HMACing the `crl` field for this
reason, but note that the recommendations below suggest to un-HMAC the
   `certificate` response parameter, which is also how the CRL is served
   via the `/pki/cert/crl` API endpoint. Additionally, the `http_raw_body`
   field can be used to return the CRL in both PEM and raw binary DER forms,
   so it is suggested not to un-HMAC that field, to avoid corrupting the log format.<br /><br />
If this is done with only a [syslog](/vault/docs/audit/syslog) audit device,
Vault can deny requests (with an opaque `500 Internal Error` message)
after the action has been performed on the server, because it was
unable to log the message.<br /><br />
The suggested workaround is to either leave the `certificate` and `crl`
response fields HMACed and/or to also enable the [`file`](/vault/docs/audit/file)
audit log type.
Some suggested keys to un-HMAC for requests are as follows:
- `csr` - the requested CSR to sign,
- `certificate` - the requested self-signed certificate to re-sign or
when importing issuers,
- Various issuance-related overriding parameters, such as:
- `issuer_ref` - the issuer requested to sign this certificate,
- `common_name` - the requested common name,
- `alt_names` - alternative requested DNS-type SANs for this certificate,
- `other_sans` - other (non-DNS, non-Email, non-IP, non-URI) requested SANs for this certificate,
- `ip_sans` - requested IP-type SANs for this certificate,
- `uri_sans` - requested URI-type SANs for this certificate,
- `ttl` - requested expiration date of this certificate,
- `not_after` - requested expiration date of this certificate,
- `serial_number` - the subject's requested serial number,
- `key_type` - the requested key type,
  - `private_key_format` - the requested key format, which is also
    used for the public certificate format,
- Various role- or issuer-related generation parameters, such as:
- `managed_key_name` - when creating an issuer, the requested managed
key name,
- `managed_key_id` - when creating an issuer, the requested managed
key identifier,
- `ou` - the subject's organizational unit,
- `organization` - the subject's organization,
- `country` - the subject's country code,
- `locality` - the subject's locality,
- `province` - the subject's province,
- `street_address` - the subject's street address,
- `postal_code` - the subject's postal code,
- `permitted_dns_domains` - permitted DNS domains,
- `policy_identifiers` - the requested policy identifiers when creating a role, and
- `ext_key_usage_oids` - the extended key usage OIDs for the requested certificate.
Some suggested keys to un-HMAC for responses are as follows:
- `certificate` - the certificate that was issued,
- `issuing_ca` - the certificate of the CA which issued the requested
certificate,
- `serial_number` - the serial number of the certificate that was issued,
- `error` - to show errors associated with the request, and
- `ca_chain` - optional due to noise; the full CA chain of the issuer of
the requested certificate.
~> Note: These list of parameters to un-HMAC are provided as a suggestion and
may not be exhaustive.
The following keys are suggested **NOT** to un-HMAC, due to their sensitive
nature:
- `private_key` - this response parameter contains the private keys
generated by Vault during issuance, and
- `pem_bundle` - this request parameter is only used on the issuer-import
paths and may contain sensitive private key material.
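As an example of such tuning (the mount path is a placeholder; extend the
key lists to match the suggestions above):

```shell-session
$ vault secrets tune \
    -audit-non-hmac-request-keys=csr \
    -audit-non-hmac-request-keys=common_name \
    -audit-non-hmac-request-keys=alt_names \
    -audit-non-hmac-response-keys=certificate \
    -audit-non-hmac-response-keys=issuing_ca \
    -audit-non-hmac-response-keys=serial_number \
    pki
```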
## Role-Based access
Vault supports [path-based ACL Policies](/vault/tutorials/getting-started/getting-started-policies)
for limiting access to various paths within Vault.
The following is a condensed example reference of ACLing the PKI Secrets
Engine. These are just a suggestion; other personas and policy approaches
may also be valid.
We suggest the following personas:
- *Operator*; a privileged user who manages the health of the PKI
subsystem; manages issuers and key material.
- *Agent*; a semi-privileged user that manages roles and handles
revocation on behalf of an operator; may also handle delegated
issuance. This may also be called an *administrator* or *role
manager*.
- *Advanced*; potentially a power-user or service that has access to
additional issuance APIs.
- *Requester*; a low-level user or service that simply requests certificates.
- *Unauthed*; any arbitrary user or service that lacks a Vault token.
For these personas, we suggest the following ACLs, in condensed, tabular form:
| Path | Operations | Operator | Agent | Advanced | Requester | Unauthed |
| :--- | :--------- | :------- | :---- | :------- | :-------- | :------- |
| `/ca(/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |
| `/ca_chain` | Read | Yes | Yes | Yes | Yes | Yes |
| `/crl(/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |
| `/crl/delta(/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |
| `/cert/:serial(/raw(/pem)?)?` | Read | Yes | Yes | Yes | Yes | Yes |
| `/issuers` | List | Yes | Yes | Yes | Yes | Yes |
| `/issuer/:issuer_ref/(json¦der¦pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `/issuer/:issuer_ref/crl(/der¦/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |
| `/issuer/:issuer_ref/crl/delta(/der¦/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |
| `/ocsp/<request>` | Read | Yes | Yes | Yes | Yes | Yes |
| `/ocsp` | Write | Yes | Yes | Yes | Yes | Yes |
| `/certs` | List | Yes | Yes | Yes | Yes | |
| `/revoke-with-key` | Write | Yes | Yes | Yes | Yes | |
| `/roles` | List | Yes | Yes | Yes | Yes | |
| `/roles/:role` | Read | Yes | Yes | Yes | Yes | |
| `/(issue¦sign)/:role` | Write | Yes | Yes | Yes | Yes | |
| `/issuer/:issuer_ref/(issue¦sign)/:role` | Write | Yes | Yes | Yes | | |
| `/config/auto-tidy` | Read | Yes | Yes | | | |
| `/config/ca` | Read | Yes | Yes | | | |
| `/config/crl` | Read | Yes | Yes | | | |
| `/config/issuers` | Read | Yes | Yes | | | |
| `/crl/rotate` | Read | Yes | Yes | | | |
| `/crl/rotate-delta` | Read | Yes | Yes | | | |
| `/roles/:role` | Write | Yes | Yes | | | |
| `/issuer/:issuer_ref` | Read | Yes | Yes | | | |
| `/sign-verbatim(/:role)?` | Write | Yes | Yes | | | |
| `/issuer/:issuer_ref/sign-verbatim(/:role)?` | Write | Yes | Yes | | | |
| `/revoke` | Write | Yes | Yes | | | |
| `/tidy` | Write | Yes | Yes | | | |
| `/tidy-cancel` | Write | Yes | Yes | | | |
| `/tidy-status` | Read | Yes | Yes | | | |
| `/config/auto-tidy` | Write | Yes | | | | |
| `/config/ca` | Write | Yes | | | | |
| `/config/crl` | Write | Yes | | | | |
| `/config/issuers` | Write | Yes | | | | |
| `/config/keys` | Read, Write | Yes | | | | |
| `/config/urls` | Read, Write | Yes | | | | |
| `/issuer/:issuer_ref` | Write | Yes | | | | |
| `/issuer/:issuer_ref/revoke` | Write | Yes | | | | |
| `/issuer/:issuer_ref/sign-intermediate` | Write | Yes | | | | |
| `/issuer/:issuer_ref/sign-self-issued` | Write | Yes | | | | |
| `/issuers/generate/+/+` | Write | Yes | | | | |
| `/issuers/import/+` | Write | Yes | | | | |
| `/intermediate/generate/+` | Write | Yes | | | | |
| `/intermediate/cross-sign` | Write | Yes | | | | |
| `/intermediate/set-signed` | Write | Yes | | | | |
| `/keys` | List | Yes | | | | |
| `/key/:key_ref` | Read, Write | Yes | | | | |
| `/keys/generate/+` | Write | Yes | | | | |
| `/keys/import` | Write | Yes | | | | |
| `/root/generate/+` | Write | Yes | | | | |
| `/root/sign-intermediate` | Write | Yes | | | | |
| `/root/sign-self-issued` | Write | Yes | | | | |
| `/root/rotate/+` | Write | Yes | | | | |
| `/root/replace` | Write | Yes | | | | |
~> Note: With managed keys, operators might need access to [read the mount
point's tunable data](/vault/api-docs/system/mounts) (Read on `/sys/mounts`) and
may need access [to use or manage managed keys](/vault/api-docs/system/managed-keys).
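As an illustration, a minimal policy for the *Requester* persona might look
like the following sketch (the mount path and role name are placeholders):

```shell-session
$ vault policy write pki-requester - <<EOF
path "pki/ca_chain"                  { capabilities = ["read"] }
path "pki/certs"                     { capabilities = ["list"] }
path "pki/roles"                     { capabilities = ["list"] }
path "pki/roles/service-example-com" { capabilities = ["read"] }
path "pki/issue/service-example-com" { capabilities = ["update"] }
EOF
```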
## Replicated DataSets
When operating with [Performance Secondary](/vault/docs/enterprise/replication#architecture)
clusters, certain data-sets are maintained across all clusters, while others for performance
and scalability reasons are kept within a given cluster.
The following table breaks down by data type what data sets will cross the cluster boundaries.
For data-types that do not cross a cluster boundary, read requests for that data will need to be
sent to the appropriate cluster that the data was generated on.
| Data Set | Replicated Across Clusters |
|--------------------------|----------------------------|
| Issuers & Keys | Yes |
| Roles | Yes |
| CRL Config | Yes |
| URL Config | Yes |
| Issuer Config | Yes |
| Key Config | Yes |
| CRL | No |
| Revoked Certificates | No |
| Leaf/Issued Certificates | No |
| Certificate Metadata | No |
The main effect is that, within the PKI secrets engine, leaf certificates
issued with `no_store` set to `false` are stored locally on the cluster that issued them.
This allows for both primary and [Performance Secondary](/vault/docs/enterprise/replication#architecture)
clusters' active node to issue certificates for greater scalability. As a
result, these certificates, metadata and any revocations are visible only on the issuing
cluster. This additionally means each cluster has its own set of CRLs, distinct
from other clusters. These CRLs should either be unified into a single CRL for
distribution from a single URI, or server operators should know to fetch all
CRLs from all clusters.
## Cluster scalability
Most non-introspection operations in the PKI secrets engine require a write to
storage, and so are forwarded to the cluster's active node for execution.
This table outlines which operations can be executed on performance standby nodes
and thus scale horizontally across all nodes within a cluster.
| Path | Operations |
|-------------------------------|----------------------|
| ca[/pem] | Read |
| cert/<em>serial-number</em> | Read |
| cert/ca_chain | Read |
| config/crl | Read |
| certs | List |
| ca_chain | Read |
| crl[/pem] | Read |
| issue | Update <sup>\*</sup> |
| revoke/<em>serial-number</em> | Read |
| sign | Update <sup>\*</sup> |
| sign-verbatim | Update <sup>\*</sup> |
\* Only if the corresponding role has `no_store` set to true, `generate_lease`
set to false and no metadata is being written. If `generate_lease` is true the
lease creation will be forwarded to the active node; if `no_store` is false
the entire request will be forwarded to the active node.
If `no_store_cert_metadata=false` and the `metadata` argument is provided, the
entire request will be forwarded to the active node.
## PSS support
Go lacks support for PSS certificates, keys, and CSRs using the `rsaPSS` OID
(`1.2.840.113549.1.1.10`). It requires all RSA certificates, keys, and CSRs
to use the alternative `rsaEncryption` OID (`1.2.840.113549.1.1.1`).
When using OpenSSL to generate CAs or CSRs from PKCS8-encoded PSS keys, the
resulting CAs and CSRs will have the `rsaPSS` OID. Go and Vault will reject
them. Instead, use OpenSSL to generate or convert to a PKCS#1v1.5 private
key file and use this to generate the CSR. Vault will, depending on the role
and the signing mechanism, still use a PSS signature despite the
`rsaEncryption` OID on the request as the SubjectPublicKeyInfo and
SignatureAlgorithm fields are orthogonal. When creating an external CA and
importing it into Vault, ensure that the `rsaEncryption` OID is present on
the SubjectPublicKeyInfo field even if the SignatureAlgorithm is PSS-based.
These certificates generated by Go (with `rsaEncryption` OID but PSS-based
signatures) are otherwise compatible with the fully PSS-based certificates.
OpenSSL and NSS support parsing and verifying chains using this type of
certificate. Note that some TLS implementations may not support these types
of certificates if they do not support `rsa_pss_rsae_*` signature schemes.
Additionally, some implementations allow rsaPSS OID certificates to contain
restrictions on signature parameters allowed by this certificate, but Go and
Vault do not support adding such restrictions.
At this time Go lacks support for signing CSRs with the PSS signature
algorithm. If using a managed key that requires an RSA PSS algorithm (such as GCP or
a PKCS#11 HSM) as a backing for an intermediate CA key, attempting to generate
a CSR (via `pki/intermediate/generate/kms`) will fail signature verification.
In this case, the CSR will need to be generated outside of Vault and the
signed final certificate can be imported into the mount.
Go additionally lacks support for creating OCSP responses with the PSS
signature algorithm. Vault will automatically downgrade issuers with
PSS-based revocation signature algorithms to PKCS#1v1.5, but note that
certain KMS devices (like HSMs and GCP) may not support this with the
same key. As a result, the OCSP responder may fail to sign responses,
returning an internal error.
## Issuer storage migration issues
When Vault migrates to the new multi-issuer storage layout on releases prior
to 1.11.6, 1.12.2, and 1.13, and storage write errors occur during the mount
initialization and storage migration process, the default issuer _may_ not
have the correct `ca_chain` value and may only have the self-reference. These
write errors most commonly manifest in logs as a message like
`failed to persist issuer ... chain to disk: <cause>` and indicate that Vault
was not stable at the time of migration. Note that this only occurs when more
than one issuer exists within the mount (such as an intermediate with root).
To fix this manually (until a new version of Vault automatically rebuilds the
issuer chain), a rebuild of the chains can be performed:
```shell-session
$ curl -X PATCH -H "Content-Type: application/merge-patch+json" -H "X-Vault-Request: true" -H "X-Vault-Token: $(vault print token)" -d '{"manual_chain":"self"}' https://.../issuer/default
$ curl -X PATCH -H "Content-Type: application/merge-patch+json" -H "X-Vault-Request: true" -H "X-Vault-Token: $(vault print token)" -d '{"manual_chain":""}' https://.../issuer/default
```
This temporarily sets the manual chain on the default issuer to a self-chain
only, before reverting it back to automatic chain building. This triggers a
refresh of the `ca_chain` field on the issuer, and can be verified with:
```shell-session
$ vault read pki/issuer/default
```
## Issuer Constraints Enforcement
Starting with versions 1.18.3, 1.18.3+ent, 1.17.10+ent and 1.16.14+ent, Vault
performs additional verifications when creating or signing leaf certificates for
issuers that have constraints extensions. This verification includes validating
extended key usage, name constraints, and correct copying of the issuer name
onto the certificate. Certificates issued without this verification might not be
accepted by end user applications.
Problems with issuance arising from this validation should be fixed by changing
the issuer certificate itself, to avoid more problems down the line.
It is possible to completely disable verification by setting environment
variable `VAULT_DISABLE_PKI_CONSTRAINTS_VERIFICATION` to `true`.
~> **Warning**: The use of environment variable `VAULT_DISABLE_PKI_CONSTRAINTS_VERIFICATION`
should be considered as a last resort.
## Tutorial
Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial.
Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
for more about how to use externally managed keys with PKI.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details.
accounts can live longer past their last issued certificate by controlling the acme account safety buffer parameter vault api docs secret pki acme account safety buffer As a consequence of the above and like the discussions in the Cluster Scalability cluster scalability section because these roles have no store false set ACME can only issue certificates on the active nodes of PR clusters standby nodes if contacted will transparently forward all requests to the active node ACME role restrictions require EAB Because ACME by default has no external authorization engine and is unauthenticated from a Vault perspective the use of roles with ACME in the default configuration are of limited value as any ACME client can request certificates under any role by proving possession of the requested certificate identifiers To solve this issue there are two possible approaches 1 Use a restrictive allowed roles allowed issuers and default directory policy ACME configuration vault api docs secret pki set acme configuration to let only a single role and issuer be used This prevents user choice allowing some global restrictions to be placed on issuance and avoids requiring ACME clients to have at initial setup access to a Vault token other mechanism for acquiring a Vault EAB ACME token 2 Use a more permissive configuration with eab policy always required vault api docs secret pki eab policy to allow more roles and users to select the roles but bind ACME clients to a Vault token which can be suitably ACL d to particular sets of approved ACME directories The choice of approach depends on the policies of the organization wishing to use ACME Another consequence of the Vault unauthenticated nature of ACME requests are that role templating based on entity information cannot be used as there is no token and thus no entity associated with the request even when EAB binding is used ACME and the public internet Using ACME is possible over the public internet public CAs like Let s Encrypt offer this as a service Similarly organizations running internal PKI infrastructure might wish to issue server certificates to pieces of infrastructure outside of their internal network boundaries from a publicly accessible Vault instance By default without enforcing a restrictive eab policy this results in a complicated threat model any external client which can prove possession of a domain can issue a certificate under this CA which might be considered more trusted by this organization As such we strongly recommend publicly facing Vault instances such as HCP Vault enforce that PKI mount operators have required a restrictive eab policy always required configuration vault api docs secret pki eab policy System administrators of Vault instances can enforce this by setting the VAULT DISABLE PUBLIC ACME true environment variable vault api docs secret pki acme external account bindings ACME errors are in server logs Because the ACME client is not necessarily trusted as account registration may not be tied to a valid Vault token when EAB is not used many error messages end up in the Vault server logs out of security necessity When troubleshooting issues with clients requesting certificates first check the client s logs if any e g certbot will state the log location on errors and then correlate with Vault server logs to identify the failure reason ACME security considerations ACME allows any client to use Vault to make some sort of external call while the design of ACME attempts to minimize this scope and will prohibit issuance if incorrect 
servers are contacted it cannot account for all possible remote server implementations Vault s ACME server makes three types of requests 1 DNS requests for acme challenge domain which should be least invasive and most safe 2 TLS ALPN requests for the acme tls 1 protocol which should be safely handled by the TLS before any application code is invoked 3 HTTP requests to http domain well known acme challenge token which could be problematic based on server design if all requests regardless of path are treated the same and assumed to be trusted this could result in Vault being used to make invalid requests Ideally any such server implementations should be updated to ignore such ACME validation requests or to block access originating from Vault to this service In all cases no information about the response presented by the remote server is returned to the ACME client When running Vault on multiple networks note that Vault s ACME server places no restrictions on requesting client destination identifier validations paths a client could use a HTTP challenge to force Vault to reach out to a server on a network it could otherwise not access ACME and client counting In Vault 1 14 ACME contributes differently to usage metrics than other interactions with the PKI Secrets Engine Due to its use of unauthenticated requests which do not generate Vault tokens it would not be counted in the traditional activity log APIs vault api docs system internal counters activity export Instead certificates issued via ACME will be counted via their unique certificate identifiers the combination of CN DNS SANs and IP SANs These will create a stable identifier that will be consistent across renewals other ACME clients mounts and namespaces contributing to the activity log presently as a non entity token attributed to the first mount which created that request Keep certificate lifetimes short for CRL s sake This secrets engine aligns with Vault s philosophy of short lived secrets As such it is not expected that CRLs will grow large the only place a private key is ever returned is to the requesting client this secrets engine does not store generated private keys except for CA certificates In most cases if the key is lost the certificate can simply be ignored as it will expire shortly If a certificate must truly be revoked the normal Vault revocation function can be used and any revocation action will cause the CRL to be regenerated When the CRL is regenerated any expired certificates are removed from the CRL and any revoked expired certificate are removed from secrets engine storage This is an expensive operation Due to the structure of the CRL standard Vault must read all revoked certificates into memory in order to rebuild the CRL and clients must fetch the regenerated CRL This secrets engine does not support multiple CRL endpoints with sliding date windows often such mechanisms will have the transition point a few days apart but this gets into the expected realm of the actual certificate validity periods issued from this secrets engine A good rule of thumb for this secrets engine would be to simply not issue certificates with a validity period greater than your maximum comfortable CRL lifetime Alternately you can control CRL caching behavior on the client to ensure that checks happen more often Often multiple endpoints are used in case a single CRL endpoint is down so that clients don t have to figure out what to do with a lack of response Run Vault in HA mode and the CRL endpoint should be available even if a particular 
node is down Note Since Vault 1 11 0 with multiple issuers in the same mount point different issuers may have different CRLs depending on subject and key material This means that Vault may need to regenerate multiple CRLs This is again a rationale for keeping TTLs short and avoiding revocation if possible Note Since Vault 1 12 0 we support two complementary revocation mechanisms Delta CRLs which allow for rebuilds of smaller incremental additions to the last complete CRL and OCSP which allows responding to revocation status requests for individual certificates When coupled with the new CRL auto rebuild functionality this means that the revoking step isn t as costly as the CRL isn t always rebuilt on each revocation outside of storage considerations However while the rebuild operation still can be expensive with lots of certificates it will be done on a schedule rather than on demand NotAfter behavior on leaf certificates In Vault 1 11 0 the PKI Secrets Engine has introduced a new leaf not after behavior parameter on issuers vault api docs secret pki leaf not after behavior This allows modification of the issuance behavior should Vault err preventing issuance of a longer lived leaf cert than issuer silently truncate to that of the issuer s NotAfter value or permit longer expirations It is strongly suggested to use err or truncate for intermediates permit is only useful for root certificates as intermediate s NotAfter expiration are checked when validating presented chains In combination with a cascading expiration with longer lived roots perhaps on the range of 2 10 years shorter lived intermediates perhaps on the range of 6 months to 2 years and short lived leaf certificates on the range of 30 to 90 days and the rotation strategies discussed in other sections vault docs secrets pki rotation primitives this should keep the CRLs adequately small Cluster performance and quantity of leaf certificates As mentioned above keeping TTLs short or using no store true and no store cert metadata true and avoiding leases is important for a healthy cluster However it is important to note this is a scale problem 10 1000 long lived stored certificates are probably fine but 50k 100k become a problem and 500k stored unexpired certificates can negatively impact even large Vault clusters even with short TTLs However once these certificates are expired a tidy operation vault api docs secret pki tidy will clean up CRLs and Vault cluster storage Note that organizational risk assessments for certificate compromise might mean certain certificate types should always be issued with no store false even short lived broad wildcard certificates say example com might be important enough to have precise control over revocation However an internal service with a well scoped certificate say service example com might be of low enough risk to issue a 90 day TTL with no store true preventing the need for revocation in the unlikely case of compromise Having a shorter TTL decreases the likelihood of needing to revoke a cert but cannot prevent it entirely and decrease the impact of any such compromise Note As of Vault 1 12 the PKI Secret Engine s Bring Your Own Cert BYOC vault api docs secret pki revoke certificate functionality allows revocation of certificates not previously stored e g issued via a role with no store true This means that setting no store true is now safe to be used globally regardless of importance of issued certificates and their likelihood for revocation You must configure issuing CRL OCSP information in 
advance This secrets engine serves CRLs from a predictable location but it is not possible for the secrets engine to know where it is running Therefore you must configure desired URLs for the issuing certificate CRL distribution points and OCSP servers manually using the config urls endpoint It is supported to have more than one of each of these by passing in the multiple URLs as a comma separated string parameter Note when using Vault Enterprise s Performance Replication features with a PKI Secrets Engine mount each cluster will have its own CRL this means each cluster s unique CRL address should be included in the AIA information https datatracker ietf org doc html rfc5280 section 5 2 7 field separately or the CRLs should be consolidated and served outside of Vault Note When using multiple issuers in the same mount it is suggested to use the per issuer AIA fields rather than the global config urls variant This is for correctness these fields are used for chain building and automatic CRL detection in certain applications If they point to the wrong issuer s information these applications may break Distribution of CRLs and OCSP Both CRLs and OCSP allow interrogating revocation status of certificates Both of these methods include internal security and authenticity both CRLs and OCSP responses are signed by the issuing CA within Vault This means both are fine to distribute over non secure and non authenticated channels such as HTTP Note The OCSP implementation for GET requests can lead to intermittent 400 errors when an encoded OCSP request contains consecutive characters Until this is resolved it is recommended to use POST based OCSP requests Automate CRL building and tidying Since Vault 1 12 the PKI Secrets Engine supports automated CRL rebuilding including optional Delta CRLs which can be built more frequently than complete CRLs via the config crl endpoint Additionally tidying of revoked and expired certificates can be configured automatically via the config auto tidy endpoint Both of these should be enabled to ensure compatibility with the wider PKIX ecosystem and performance of the cluster Spectrum of revocation support Starting with Vault 1 13 the PKI secrets engine has the ability to support a spectrum of cluster sizes and certificate revocation quantities For users with few revocations or who want a unified view and have the inter cluster bandwidth to support it we recommend turning on auto rebuilding of CRLs cross cluster revocation queues and cross cluster CRLs This allows all consumers of the CRLs to have the most accurate picture of revocations regardless of which cluster they talk to If the unified CRL becomes too big for the underlying storage mechanism or for a single host to build we recommend relying on OCSP instead of CRLs These have much smaller storage entries and the CRL disabled flag is independent of unified crls allowing unified OCSP to remain However when cross cluster traffic becomes too high or if CRLs are still necessary in addition to OCSP we recommend sharding the CRL between different clusters This has been the default behavior of Vault but with the introduction of per cluster templated AIA information the leaf certificate s Authority Information Access AIA info will point directly to the cluster which issued it allowing the correct CRL for this cert to be identified by the application This more correctly mimics the behavior of Let s Encrypt s CRL sharding https letsencrypt org 2022 09 07 new life for crls html This sharding behavior can also be used for OCSP if 
the cross cluster traffic for revocation entries becomes too high For users who wish to manage revocation manually using the audit logs to track certificate issuance would allow an external system to identify which certificates were issued These can be manually tracked for revocation and a custom CRL can be built vault api docs secret pki combine crls from the same issuer using externally tracked revocations This would allow usage of roles set to no store true so Vault is strictly used as an issuing authority and isn t storing any certificates issued or revoked For the highest of revocation volumes this could be the best option Notably this last approach can either be used for the creation of externally stored unified or sharded CRLs If a single external unified CRL becomes unreasonably large each cluster s certificates could have AIA info point to an externally stored and maintained sharded CRL However Vault has no mechanism to sign OCSP requests at this time What are Cross Cluster CRLs Vault Enterprise supports a clustering mode called Performance Replication vault docs enterprise replication performance replication In a replicated PKI Secrets Engine mount issuer and role information is synced between the Performance Primary and all Performance Secondary clusters However each Performance Secondary cluster has its own local storage of issued certificates and revocations which is not synced In Vault versions before 1 13 this meant that each of these clusters had its own CRL and OCSP data and any revocation requests needed to be processed on the cluster that issued it or BYOC used Starting with Vault 1 13 we ve added two features vault api docs secret pki read crl configuration to Vault Enterprise to help manage this setup more correctly and easily revocation request queues cross cluster revocation true in config crl and unified revocation entries unified crl true in config crl The former allows operators revoking by serial number to request a certificate be revoked regardless of which cluster it was issued on For example if a request goes into the Performance Primary but it didn t issue the certificate it ll write a cross cluster revocation request and mark the results as pending If another cluster already has this certificate in storage it will revoke it and confirm the revocation back to the main cluster An operator can list pending revocations vault api docs secret pki list revocation requests to see the status of these requests To clean up invalid requests e g if the cluster which had that certificate disappeared if that certificate was issued with no store true on the role or if it was an invalid serial number an operator can use tidy vault api docs secret pki tidy with tidy revocation queue true optionally shortening revocation queue safety buffer to remove them quicker The latter allows all clusters to have a unified view of revocations that is to have access to a list of revocations performed by other clusters While the configuration parameter includes crl in the description this applies to both CRLs vault api docs secret pki read issuer crl and the OCSP responder vault api docs secret pki ocsp request When this revocation replication occurs if any cluster considers a cert revoked when another doesn t e g via BYOC revocation of a no store false certificate all clusters will now consider it revoked assuming it hasn t expired Notably the active node of the primary cluster will be used to rebuild the CRL as this can grow large if many clusters have lots of revoked certs an operator 
might need to disable CRL building disabled true in config crl or increase the storage size vault docs configuration storage raft max entry size As an aside all new cross cluster writes from Performance Secondary up to the Performance Primary are performed synchronously This gives the caller confidence that the request actually went through at the expense of incurring a bit higher overhead for revoking certificates When a node loses its GRPC connection e g during leadership election or being otherwise unable to contact the active primary errors will occur though the local portion of the write if any will still succeed For cross cluster revocation requests due to there being no local write this means that the operation will need to be retried but in the event of an issue writing a cross cluster revocation entry when the cert existed locally the revocation will eventually be synced across clusters when the connection comes back Issuer subjects and CRLs As noted on several GitHub issues https github com hashicorp vault issues 10176 Go s x509 library has an opinionated parsing and structuring mechanism for certificate s Subjects Issuers created within Vault are fine but when using externally created CA certificates these may not be parsed correctly throughout all parts of the PKI In particular CRLs embed a modified copy of the issuer name This can be avoided by using OCSP to track revocation but note that performance characteristics are different between OCSP and CRLs Note As of Go 1 20 and Vault 1 13 Go correctly formats the CRL s issuer name and this notice does not apply https github com golang go commit a367981b4c8e3ae955eca9cc597d9622201155f3 Automate leaf certificate renewal To manage certificates for services at scale it is best to automate the certificate renewal as much as possible Vault Agent has support for automatically renewing requested certificates vault docs agent and proxy agent template certificates based on the validTo field Other solutions might involve using cert manager https cert manager io in Kubernetes or OpenShift backed by the Vault CA Safe minimums Since its inception this secrets engine has enforced SHA256 for signature hashes rather than SHA1 As of 0 5 1 a minimum of 2048 bits for RSA keys is also enforced Software that can handle SHA256 signatures should also be able to handle 2048 bit keys and 1024 bit keys are considered unsafe and are disallowed in the Internet PKI Token lifetimes and revocation When a token expires it revokes all leases associated with it This means that long lived CA certs need correspondingly long lived tokens something that is easy to forget Starting with 0 6 root and intermediate CA certs no longer have associated leases to prevent unintended revocation when not using a token with a long enough lifetime To revoke these certificates use the pki revoke endpoint Safe usage of roles The Vault PKI Secrets Engine supports many options to limit issuance via Roles vault api docs secret pki create update role Careful consideration of construction is necessary to ensure that more permissions are not given than necessary Additionally roles should generally do one thing multiple roles should be preferable over having too permissive roles that allow arbitrary issuance e g allow any name should generally be used sparingly if at all allow any name should generally be set to false this is the default allow localhost should generally be set to false for production services unless listening on localhost is expected Unless necessary allow wildcard 
certificates should generally be set to false This is not the default due to backwards compatibility concerns This is especially necessary when allow subdomains or allow glob domains are enabled enforce hostnames should generally be enabled for TLS services this is the default allow ip sans should generally be set to false but defaults to true unless IP address certificates are explicitly required When using short TTLs 30 days or with high issuance volume it is generally recommend to set no store to true defaults to false This prevents serial number based revocation but allows higher throughput as Vault no longer needs to store every issued certificate This is discussed more in the Replicated Datasets replicated datasets section below Do not use roles with root certificates issuer ref Root certificates should generally only issue intermediates see the section on CA hierarchy above use a ca hierarchy which doesn t rely on roles Limit key usage and ext key usage don t attempt to allow all usages for all purposes Generally the default values are useful for client and server TLS authentication Telemetry Beyond Vault s default telemetry around request processing PKI exposes count and duration metrics for the issue sign sign verbatim and revoke calls The metrics keys take the form mount path operation failure with labels for namespace and role name Note that these metrics are per node and thus would need to be aggregated across nodes and clusters Auditing Because Vault HMACs audit string keys by default it is necessary to tune PKI secrets mounts to get an accurate view of issuance that is occurring under this mount Note Depending on usage of Vault CRLs and rarely CA chains can grow to be rather large We don t recommend un HMACing the crl field for this reason but note that the recommendations below suggest to un HMAC the certificate response parameter which the CRL can be served in via the pki cert crl API endpoint Additionally the http raw body can be used to return CRL both in PEM and raw binary DER form so it is suggested not to un HMAC that field to not corrupt the log format br br If this is done with only a syslog vault docs audit syslog audit device Vault can deny requests with an opaque 500 Internal Error message after the action has been performed on the server because it was unable to log the message br br The suggested workaround is to either leave the certificate and crl response fields HMACed and or to also enable the file vault docs audit file audit log type Some suggested keys to un HMAC for requests are as follows csr the requested CSR to sign certificate the requested self signed certificate to re sign or when importing issuers Various issuance related overriding parameters such as issuer ref the issuer requested to sign this certificate common name the requested common name alt names alternative requested DNS type SANs for this certificate other sans other non DNS non Email non IP non URI requested SANs for this certificate ip sans requested IP type SANs for this certificate uri sans requested URI type SANs for this certificate ttl requested expiration date of this certificate not after requested expiration date of this certificate serial number the subject s requested serial number key type the requested key type private key format the requested key format which is also used for the public certificate format as well Various role or issuer related generation parameters such as managed key name when creating an issuer the requested managed key name managed key id when creating 
an issuer the requested managed key identifier ou the subject s organizational unit organization the subject s organization country the subject s country code locality the subject s locality province the subject s province street address the subject s street address postal code the subject s postal code permitted dns domains permitted DNS domains policy identifiers the requested policy identifiers when creating a role and ext key usage oids the extended key usage OIDs for the requested certificate Some suggested keys to un HMAC for responses are as follows certificate the certificate that was issued issuing ca the certificate of the CA which issued the requested certificate serial number the serial number of the certificate that was issued error to show errors associated with the request and ca chain optional due to noise the full CA chain of the issuer of the requested certificate Note These list of parameters to un HMAC are provided as a suggestion and may not be exhaustive The following keys are suggested NOT to un HMAC due to their sensitive nature private key this response parameter contains the private keys generated by Vault during issuance and pem bundle this request parameter is only used on the issuer import paths and may contain sensitive private key material Role Based access Vault supports path based ACL Policies vault tutorials getting started getting started policies for limiting access to various paths within Vault The following is a condensed example reference of ACLing the PKI Secrets Engine These are just a suggestion other personas and policy approaches may also be valid We suggest the following personas Operator a privileged user who manages the health of the PKI subsystem manages issuers and key material Agent a semi privileged user that manages roles and handles revocation on behalf of an operator may also handle delegated issuance This may also be called an administrator or role manager Advanced potentially a power user or service that has access to additional issuance APIs Requester a low level user or service that simply requests certificates Unauthed any arbitrary user or service that lacks a Vault token For these personas we suggest the following ACLs in condensed tabular form Path Operations Operator Agent Advanced Requester Unauthed ca pem Read Yes Yes Yes Yes Yes ca chain Read Yes Yes Yes Yes Yes crl pem Read Yes Yes Yes Yes Yes crl delta pem Read Yes Yes Yes Yes Yes cert serial raw pem Read Yes Yes Yes Yes Yes issuers List Yes Yes Yes Yes Yes issuer issuer ref json der pem Read Yes Yes Yes Yes Yes issuer issuer ref crl der pem Read Yes Yes Yes Yes Yes issuer issuer ref crl delta der pem Read Yes Yes Yes Yes Yes ocsp request Read Yes Yes Yes Yes Yes ocsp Write Yes Yes Yes Yes Yes certs List Yes Yes Yes Yes revoke with key Write Yes Yes Yes Yes roles List Yes Yes Yes Yes roles role Read Yes Yes Yes Yes issue sign role Write Yes Yes Yes Yes issuer issuer ref issue sign role Write Yes Yes Yes config auto tidy Read Yes Yes config ca Read Yes Yes config crl Read Yes Yes config issuers Read Yes Yes crl rotate Read Yes Yes crl rotate delta Read Yes Yes roles role Write Yes Yes issuer issuer ref Read Yes Yes sign verbatim role Write Yes Yes issuer issuer ref sign verbatim role Write Yes Yes revoke Write Yes Yes tidy Write Yes Yes tidy cancel Write Yes Yes tidy status Read Yes Yes config auto tidy Write Yes config ca Write Yes config crl Write Yes config issuers Write Yes config keys Read Write Yes config urls Read Write Yes issuer issuer ref Write Yes issuer issuer 
ref revoke Write Yes issuer issuer ref sign intermediate Write Yes issuer issuer ref sign self issued Write Yes issuers generate Write Yes issuers import Write Yes intermediate generate Write Yes intermediate cross sign Write Yes intermediate set signed Write Yes keys List Yes key key ref Read Write Yes keys generate Write Yes keys import Write Yes root generate Write Yes root sign intermediate Write Yes root sign self issued Write Yes root rotate Write Yes root replace Write Yes Note With managed keys operators might need access to read the mount point s tunable data vault api docs system mounts Read on sys mounts and may need access to use or manage managed keys vault api docs system managed keys Replicated DataSets When operating with Performance Secondary vault docs enterprise replication architecture clusters certain data sets are maintained across all clusters while others for performance and scalability reasons are kept within a given cluster The following table breaks down by data type what data sets will cross the cluster boundaries For data types that do not cross a cluster boundary read requests for that data will need to be sent to the appropriate cluster that the data was generated on Data Set Replicated Across Clusters Issuers Keys Yes Roles Yes CRL Config Yes URL Config Yes Issuer Config Yes Key Config Yes CRL No Revoked Certificates No Leaf Issued Certificates No Certificate Metadata No The main effect is that within the PKI secrets engine leaf certificates issued with no store set to false are stored local to the cluster that issued them This allows for both primary and Performance Secondary vault docs enterprise replication architecture clusters active node to issue certificates for greater scalability As a result these certificates metadata and any revocations are visible only on the issuing cluster This additionally means each cluster has its own set of CRLs distinct from other clusters These CRLs should either be unified into a single CRL for distribution from a single URI or server operators should know to fetch all CRLs from all clusters Cluster scalability Most non introspection operations in the PKI secrets engine require a write to storage and so are forwarded to the cluster s active node for execution This table outlines which operations can be executed on performance standby nodes and thus scale horizontally across all nodes within a cluster Path Operations ca pem Read cert em serial number em Read cert ca chain Read config crl Read certs List ca chain Read crl pem Read issue Update sup sup revoke em serial number em Read sign Update sup sup sign verbatim Update sup sup Only if the corresponding role has no store set to true generate lease set to false and no metadata is being written If generate lease is true the lease creation will be forwarded to the active node if no store is false the entire request will be forwarded to the active node If no store cert metadata false and metadata argument is provided the entire request will be forwarded to the active node PSS support Go lacks support for PSS certificates keys and CSRs using the rsaPSS OID 1 2 840 113549 1 1 10 It requires all RSA certificates keys and CSRs to use the alternative rsaEncryption OID 1 2 840 113549 1 1 1 When using OpenSSL to generate CAs or CSRs from PKCS8 encoded PSS keys the resulting CAs and CSRs will have the rsaPSS OID Go and Vault will reject them Instead use OpenSSL to generate or convert to a PKCS 1v1 5 private key file and use this to generate the CSR Vault will depending on the role 
and the signing mechanism still use a PSS signature despite the rsaEncryption OID on the request as the SubjectPublicKeyInfo and SignatureAlgorithm fields are orthogonal When creating an external CA and importing it into Vault ensure that the rsaEncryption OID is present on the SubjectPublicKeyInfo field even if the SignatureAlgorithm is PSS based These certificates generated by Go with rsaEncryption OID but PSS based signatures are otherwise compatible with the fully PSS based certificates OpenSSL and NSS support parsing and verifying chains using this type of certificate Note that some TLS implementations may not support these types of certificates if they do not support rsa pss rsae signature schemes Additionally some implementations allow rsaPSS OID certificates to contain restrictions on signature parameters allowed by this certificate but Go and Vault do not support adding such restrictions At this time Go lacks support for signing CSRs with the PSS signature algorithm If using a managed key that requires a RSA PSS algorithm such as GCP or a PKCS 11 HSM as a backing for an intermediate CA key attempting to generate a CSR via pki intermediate generate kms will fail signature verification In this case the CSR will need to be generated outside of Vault and the signed final certificate can be imported into the mount Go additionally lacks support for creating OCSP responses with the PSS signature algorithm Vault will automatically downgrade issuers with PSS based revocation signature algorithms to PKCS 1v1 5 but note that certain KMS devices like HSMs and GCP may not support this with the same key As a result the OCSP responder may fail to sign responses returning an internal error Issuer storage migration issues When Vault migrates to the new multi issuer storage layout on releases prior to 1 11 6 1 12 2 and 1 13 and storage write errors occur during the mount initialization and storage migration process the default issuer may not have the correct ca chain value and may only have the self reference These write errors most commonly manifest in logs as a message like failed to persist issuer chain to disk cause and indicate that Vault was not stable at the time of migration Note that this only occurs when more than one issuer exists within the mount such as an intermediate with root To fix this manually until a new version of Vault automatically rebuilds the issuer chain a rebuild of the chains can be performed curl X PATCH H Content Type application merge patch json H X Vault Request true H X Vault Token vault print token d manual chain self https issuer default curl X PATCH H Content Type application merge patch json H X Vault Request true H X Vault Token vault print token d manual chain https issuer default This temporarily sets the manual chain on the default issuer to a self chain only before reverting it back to automatic chain building This triggers a refresh of the ca chain field on the issuer and can be verified with vault read pki issuer default Issuer Constraints Enforcement Starting with versions 1 18 3 1 18 3 ent 1 17 10 ent and 1 16 14 ent Vault performs additional verifications when creating or signing leaf certificates for issuers that have constraints extensions This verification includes validating extended key usage name constraints and correct copying of the issuer name onto the certificate Certificates issued without this verification might not be accepted by end user applications Problems with issuance arising from this validation should be fixed by changing the 
issuer certificate itself to avoid more problems down the line It is possible to completely disable verification by setting environment variable VAULT DISABLE PKI CONSTRAINTS VERIFICATION to true Warning The use of environment variable VAULT DISABLE PKI CONSTRAINTS VERIFICATION should be considered as a last resort Tutorial Refer to the Build Your Own Certificate Authority CA vault tutorials secrets management pki engine guide for a step by step tutorial Have a look at the PKI Secrets Engine with Managed Keys vault tutorials enterprise managed key pki for more about how to use externally managed keys with PKI API The PKI secrets engine has a full HTTP API Please see the PKI secrets engine API vault api docs secret pki for more details |
---
layout: docs
page_title: Certificate Management Protocol v2 (CMPv2) within Vault | PKI - Secrets Engines
description: An overview of the Certificate Management Protocol (v2) implementation within Vault.
---
# PKI secrets engine - Certificate Management Protocol v2 (CMPv2) <EnterpriseAlert inline="true" />
This document summarizes Vault's PKI Secrets Engine
implementation of the [CMPv2 protocol](https://datatracker.ietf.org/doc/html/rfc4210) <EnterpriseAlert inline="true" />,
its configuration, and limitations.
## What is Certificate Management Protocol v2 (CMPv2)?
The CMP protocol is an IETF-standardized protocol, [RFC 4210](https://datatracker.ietf.org/doc/html/rfc4210),
that allows clients to acquire client certificates and their associated
Certificate Authority (CA) certificates.
## Enabling CMPv2 support on a Vault PKI mount
To configure an existing mount to serve CMPv2 clients, the following steps are
required, which are broken down into four main categories:
1. [Configuring an Issuer](#configuring-an-issuer)
1. [Authentication mechanisms](#configuring-cmpv2-authentication)
1. [Updating PKI tunable parameters](#updating-the-pki-mount-tunable-parameters)
1. [PKI CMPv2 configuration](#enabling-cmpv2)
### Configuring an Issuer
CMPv2 is a bit unique in that it uses the issuing CA certificate to sign the
CMP messages. This means your issuer must have the `DigitalSignature` key
usage.
Existing CA issuers likely do not have this, so you will need to generate a
new issuer (likely an intermediate) with this property. If you are configuring
PKI for the first time or creating a new issuer, ensure you set `key_usage` to
include digital signatures, for example `CRL,CASign,DigitalSignature`.
See [Generate intermediate CSR](/vault/api-docs/secret/pki#generate-intermediate-csr)
for details.
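As a rough sketch of what this can look like (the `pki_int` and `pki_root` mount paths are placeholders, and the availability of the `key_usage` parameter on the CA signing endpoint depends on your Vault version, so consult the API docs before relying on it):
```shell-session
# Generate a CSR for a new issuing CA inside the intermediate mount.
$ vault write -field=csr pki_int/intermediate/generate/internal \
    common_name="CMPv2 Issuing CA" > intermediate.csr

# Sign it with the root, requesting the DigitalSignature key usage
# (assumption: key_usage is accepted here on your Vault version).
$ vault write -field=certificate pki_root/root/sign-intermediate \
    csr=@intermediate.csr \
    key_usage="CRL,CASign,DigitalSignature" \
    ttl="43800h" > intermediate.pem

# Import the signed certificate back into the intermediate mount.
$ vault write pki_int/intermediate/set-signed certificate=@intermediate.pem
```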
### Configuring CMPv2 Authentication
At this time, Vault's implementation of CMPv2 supports only
[certificate TLS authentication](/vault/docs/auth/cert), where a client's proof
of possession of a TLS client certificate authenticates it to Vault.
Authentication leverages a separate Vault authentication
mount, within the same namespace, to validate the client-provided credentials
and to determine the ACL policy to enforce.
For proper accounting, a mount supporting CMPv2 authentication should be
dedicated to this purpose, not shared with other workflows. In other words,
create a new certificate auth mount for CMPv2 even if you already have
another in use for other purposes.
When setting up the authentication mount for CMPv2 clients, the token type must
be configured to return [batch tokens](/vault/docs/concepts/tokens#batch-tokens).
Batch tokens are required to avoid an excessive number of leases being
generated and persisted, as every incoming CMPv2 request needs to be authenticated.
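A minimal sketch of setting up such a mount follows (the role name, policy name, and CA file are illustrative; the policy itself is defined in the next step):
```shell-session
# Enable a cert auth mount dedicated to CMPv2 clients.
$ vault auth enable cert

# Tune the mount so that authentication returns batch tokens.
$ vault auth tune -token-type=batch cert

# Trust a client CA, attaching the policy that grants access to the CMP path.
$ vault write auth/cert/certs/cmp-clients \
    certificate=@client-ca.pem \
    token_policies="cmp-access"
```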
The path within an ACL policy must match the `cmp` path underneath the
PKI mount. The path to use can be the default `cmp` path or a role-based one.
If using `sign-verbatim` as the path policy, the following
ACL policy will allow an authenticated client to access the required PKI CMP path.
```
path "pki/cmp" {
  capabilities = ["update", "create"]
}
```
For a role-based path policy, this sample policy can be used:
```
path "pki/roles/my-role-name/cmp" {
  capabilities = ["update", "create"]
}
```
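Assuming the chosen policy is saved as `cmp-access.hcl` (a hypothetical file name), it can be loaded with `vault policy write` and then attached to the certificate auth role via its `token_policies` field:
```shell-session
$ vault policy write cmp-access cmp-access.hcl
```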
### Updating the PKI mount tunable parameters
Once the authentication mount has been created and configured, the
authentication mount's accessor will need to be captured and added to the PKI
mount's [delegated auth accessors](/vault/api-docs/system/mounts#delegated_auth_accessors).
To get an authentication mount's accessor field, the following command can be used:
```shell-session
$ vault read -field=accessor sys/auth/cert
```
For CMP to work with certain clients, a few response headers need to be
explicitly allowed, trailing slashes must be trimmed, and the list of
accessors the mount can delegate authentication towards must be configured.
The following command grants the required response headers; replace the
`delegated-auth-accessors` value to match your own.
```shell-session
$ vault secrets tune \
-allowed-response-headers="Content-Transfer-Encoding" \
-allowed-response-headers="Content-Length" \
-allowed-response-headers="WWW-Authenticate" \
-delegated-auth-accessors="auth_cert_4088ac2d" \
-trim-request-trailing-slashes="true" \
pki
```
### Enabling CMPv2
Enabling CMP is a matter of writing to the `config/cmp` endpoint to enable it
and to configure the default path policy and authentication.
```shell-session
$ vault write pki/config/cmp -<<EOC
{
  "enabled": true,
  "default_path_policy": "role:example-role",
  "authenticators": {
    "cert": {
      "accessor": "auth_cert_4088ac2d"
    }
  },
  "audit_fields": ["common_name", "alt_names", "ip_sans", "uri_sans"]
}
EOC
```
Substitute your own role and accessor values. After this, the
CMP endpoints will be able to handle client requests, authenticated with the
previously configured cert auth method.
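The resulting configuration can be read back to confirm it took effect:
```shell-session
$ vault read pki/config/cmp
```
As an illustration of the client side, OpenSSL 3.x ships a `cmp` application that can perform an Initialization Request (`ir`) against the mount. The following is a sketch only: the server address, DN values, and file names are placeholders, and the exact flags required (for example, around CMP-level message protection) depend on your OpenSSL version and client setup.
```shell-session
$ openssl cmp -cmd ir \
    -server "https://vault.example.com:8200/v1/pki/cmp" \
    -recipient "/CN=CMPv2 Issuing CA" \
    -newkey client-key.pem \
    -subject "/CN=app01.example.com" \
    -tls_used -tls_cert cmp-client.pem -tls_key cmp-client-key.pem \
    -trusted issuing-ca.pem \
    -certout issued-cert.pem
```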
## Limitations
The initial release of CMPv2 support is intentionally limited to a subset of the
protocol, covering Initialization, Certification, and Key Update, over HTTP.
In particular, the following are not yet supported:
* Basic authentication scheme using PasswordBasedMac
* Revocation
* CRL fetching via CMP itself
* CA creation/update operations
Note that CMPv2 is not integrated with these existing Vault PKI features:
* Certificate Metadata - CMPv2 has no means of providing metadata.
* Certificate Issuance External Policy Service [(CIEPS)](/vault/docs/secrets/pki/cieps)
---
layout: docs
page_title: 'PKI - Secrets Engine: Rotation Primitives'
description: The PKI secrets engine for Vault generates TLS certificates.
---
# PKI secrets engine - rotation primitives
Since Vault 1.11.0, Vault's PKI Secrets Engine supports multiple issuers in a
single mount point. By using the certificate types below, rotation can be
accomplished in various situations involving both root and intermediate CAs
managed by Vault.
## X.509 certificate fields
X.509 is a complex specification; modern implementations tend to refer to
[RFC 5280](https://datatracker.ietf.org/doc/html/rfc5280) for specific
details. For validation of certificates, both RFC 5280 and the TLS
validation [RFC 6125](https://datatracker.ietf.org/doc/html/rfc6125) are
important for understanding how to achieve rotation.
The following is a simplification of these standards for the purpose of
this document.
Every X.509 certificate begins with an asymmetric key pair, using an algorithm
like RSA or ECDSA. This key pair is used to create a Certificate Signing
Request (CSR), which contains a set of fields the requester would like in the
final certificate (though it is up to the Certificate Authority (CA) to decide which
fields to take from the CSR and which to override). The CSR also contains the
public key of the pair, which is signed by the private key of the key pair to
prove possession. Usually, the requester would ask for attributes in the
Subject field of the CSR or in the Subject Alternative Name extension of the
CSR to be respected in the final certificate. It is up to the CA whether these
values are trusted. When approved by the issuing authority (which may be backed by
this asymmetric key itself in the case of a root self-signed certificate), the
authority attaches the Subject of _its_ certificate to the issued certificate in
the Issuer field, assigns a unique serial number to the issued certificate, and
signs the set of fields with its private key, thus creating the certificate.
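To see these fields on a concrete certificate, standard tooling can decode them; for example, with `cert.pem` as a placeholder file:
```shell-session
# Print the rotation-relevant fields discussed below.
$ openssl x509 -in cert.pem -noout \
    -subject -issuer -serial -startdate -enddate
# The full text dump also shows extensions such as the Authority Key Identifier.
$ openssl x509 -in cert.pem -noout -text
```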
There are some important restrictions here:
- One certificate can only have one Issuer, but this issuer is identified by
the Subject on the issuing certificate and its public key.
- One key pair can be used for multiple certificates, but one certificate can
only have one backing key material.
The following fields on the final certificate are relevant to rotation:
- The backing [public](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.7)
and private key material (Subject Public Key Info).
- Note that the private key is not included in the certificate but is
uniquely determined by the public key material.
- The [Subject](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6) of the certificate.
- This identifies the entity to which the certificate was issued. While the
SAN values (in the [Subject Alternative Name](https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.6)
extension) are useful when validating TLS Server certificates against the
negotiated hostname and URI, they aren't generally relevant for the purposes
of validating intermediate certificate chains or in rotation.
- The [Validity](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.5)
period of this certificate.
- Notably, RFC 5280 does not place any requirements around the issued
certificate's validity period relative to the validity period of the
issuing certificate. However, it [does state](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.5)
that certificates ought to be revoked if their status cannot be maintained
up to their notAfter date. This is why Vault 1.11's `/pki/issuer/:issuer_ref`
configuration endpoint maintains the `leaf_not_after_behavior` per-issuer
rather than per-role.
- Additionally, some browsers will place ultimate trust in the certificates
in their trust stores, even when these certificates are expired.
- Note that this only applies to certificates in the trust store; validity
periods will still be enforced for certificates not in the store (such
as intermediates).
- The [Issuer](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.4) and
[signatureValue](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.1.3)
of this certificate.
- In the issued certificate's Issuer field, the issuing certificate places
its own Subject value. This allows the issuer to be identified later
(without having to try signature validation against every known local
certificate), when validating the presented certificate and chain.
- The signature over the entire certificate (by the issuer's private key)
is then placed in the signatureValue field.
- The optional [Authority Key Identifier](https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.1)
field.
- This field can contain either (or both) of two values:
- The hash of the issuer's public key. This extension is set and this
value is filled in by Vault.
- The Issuer's Subject and Serial Number. This value is not set by Vault.
- The latter is a dangerous restriction for the purposes of rotation: it
prevents cross-signing and reissuance as the new issuing certificates
(while having the same backing key material) will have different serial
numbers. See the [Limitations of Primitives](#limitations-of-primitives)
section below for more information on this restriction.
- The [Serial Number](https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.2)
of this certificate.
- This field is unique to a specific issuer; when a certificate is
reissued by its parent authority, it will always have a different serial
number field.
- The [CRL distribution](https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.13)
point field.
- This is a field detailing where a CRL is expected to exist for this
certificate and under which CRL issuers (defaulting to the issuing
certificate itself) the CRL is expected to be signed by. This is mostly
informational: for server software like nginx, Vault's Cert Auth method,
and Apache, CRLs are provided to the server, rather than having the
server fetch CRLs for certificates automatically.
- Note that root certificates (in browsers' trust stores) are generally not
considered revocable. However, if an intermediate is revoked by serial,
it will appear on its parent's CRL, and may prevent rotation from
happening.
## X.509 rotation primitives
Rotation (from an organizational standpoint) can only safely happen with
certain intermediate X.509 certificates being issued. To distinguish the two
types of certificates used to achieve rotation, this document notates them
as _primitives_.
Rotation of an end-entity certificate is trivial from an X.509 trust chain
perspective; this process happens every day and should only depend on what is
in the trust store and not the end-entity certificate itself. In Vault, the
requester would hit the various issuance endpoints (`/pki/issue/:name` or
`/pki/sign/:name` -- or use the unsafe `/pki/sign-verbatim`) and swap out the
old certificate with the new certificate and reload the configuration or
restart the service. Other parts of the organization might use
[ACME](https://datatracker.ietf.org/doc/html/rfc8555) for certificate issuance
and rotation, especially if the service is public-facing (and thus needs to
be issued by a Public CA). Given it was signed by a trusted root, any devices
connecting to the service would not know the difference.
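For illustration, a hedged sketch of leaf rotation against a mount named
`pki` with a hypothetical role `example-dot-com`; the fresh certificate
simply replaces the old one in the service's configuration:

```shell-session
$ vault write pki/issue/example-dot-com \
    common_name="service.example.com" ttl=72h
```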
Rotation of intermediate certificates is almost as easy. Assuming a decent
operational setup (wherein during end-entity issuance, the full certificate
chain is updated in the service's configuration), this should be as easy as
creating a new intermediate CA, signing it against the root CA, and then
beginning issuance against the new intermediate certificate. In Vault, if
the intermediate is generated in an existing mount path (or is moved into
such), the requesting entity shouldn't care much. Under ACME, Let's Encrypt
has successfully rotated intermediates to present a cross-signed chain
([for older Android devices](https://letsencrypt.org/2020/12/21/extending-android-compatibility.html)).
Assuming the old intermediate's parent(s) are still valid and trusted,
certificates issued under old intermediates should continue to validate.
The hard part of rotation--calling for the use of these primitives--is
rotating root certificates. These live in every device's trust store and
are hard to update from an organization-wide operational perspective.
Unless the organization can swap out roots almost instantaneously and
simultaneously (e.g., via an agent) with no missed devices, this process
will likely span months.
To make this process lower risk, there are various primitive certificate
types that use the [above certificate fields](#x-509-certificate-fields).
Key to their success is the following note:
~> Note: While certificates are added to the trust store, it is ultimately
the associated key material that determines trust: two issuer certificates
with the same subject but different public keys cannot validate the same
leaf certificate; only if the keys are the same can this occur.
### Cross-Signed primitive
This is the most common type of rotation primitive. A common CSR is signed by
two CAs, resulting in two certificates. These certificates must have the same
Subject (but may have different Issuers and will have different Serial Numbers)
and the same backing key material, to allow certificates they sign to be
trusted by either variant.
Note that, due to restrictions in how end-entity certificates are used and
validated (services and validation libraries expect only one), cross-signing
most typically only applies to intermediates.
#### A note on Cross-Signed roots
Technically, cross-signing can occur between two roots, allowing trust bundles
with either root to validate certs issued through the other. However, this
process creates a certificate that is effectively an intermediate (as it is
no longer self-signed) and usually must be served alongside the trust chain.
Given this restriction, it's preferable to instead cross-sign the top-level
intermediates under the root, unless strictly necessary because the old root
certificate has been used to directly issue leaf certificates.
So, the rest of this process flow assumes an intermediate is being
cross-signed as this is more common.
##### Process flow
```
-------------------
| generate key pair | -------------> ...
------------------- ...
| | ...
-------------- -------------- ...
| generate CSR | | generate CSR | ...
-------------- -------------- ...
| | ...
----------- ----------- ...
| signed by | | signed by | ...
| root A | | root B | ...
----------- ----------- ...
```
Here, a key pair was generated at some point in time. Two CSRs are created and
sent to two different root authorities (Root A and Root B). These result in two
separate certificates (potentially with different validity periods) with the
same Subject and same backing key material.
Note that this cross-signing need not happen simultaneously; there could be a
gap of several years between the first and second certificate. Additionally,
there's no limit on the number of cross-signed "duplicate" (used loosely--with
the same subject and key material) certificates: this could be cross-signed
by many different root certificates if necessary and desired.
##### Certificate hierarchy
```
-------- --------
| root A | | root B |
-------- --------
| |
---------------- ----------------
| intermediate C | <- same key material -> | intermediate D |
---------------- | ----------------
|
-------------------
| leaf certificates |
-------------------
```
The above process results in two trust paths: either of root A or root B (or
both) could exist in the client's trust stores and the leaf certificate would
validate correctly. Because the same key material is used for both intermediate
certificates (C and D), the issued leaf certificate's signature field would
be the same regardless of which intermediate was contacted.
Cross-signing is thus a unifying primitive; two separate trust paths now join
into a single one: the leaf certificate's Issuer field points to two
separate paths (via duplication of the certificate in the chain) and is
conditionally validated based on which root is present in the trust store.
This construct is documented and used in several places:
- https://letsencrypt.org/certificates/
- https://scotthelme.co.uk/cross-signing-alternate-trust-paths-how-they-work/
- https://security.stackexchange.com/questions/14043/what-is-the-use-of-cross-signing-certificates-in-x-509
#### Execution in Vault
To create a cross-signed certificate in Vault, use the [`/intermediate/cross-sign`
endpoint](/vault/api-docs/secret/pki#generate-intermediate-csr). Here, when creating
a cross-signature to allow `cert B` to be validated by `cert A`, provide the values
(`key_ref`, all Subject parts, &c) for `cert B` during intermediate generation.
Then sign this CSR (using the [`/issuer/:issuer_ref/sign-intermediate`
endpoint](/vault/api-docs/secret/pki#sign-intermediate)) with `cert A`'s reference
and provide necessary values from `cert B` (e.g., Subject parts). `cert A` may
live outside Vault. Finally, import the cross-signed certificate into Vault
[using the `/issuers/import/cert` endpoint](/vault/api-docs/secret/pki#import-ca-certificates-and-keys).
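Put together, a minimal sketch of this flow, assuming both issuers live in
the same `pki` mount; the issuer and key references are placeholders:

```shell-session
$ # Generate a CSR reusing cert B's key material and Subject values
$ vault write -field=csr pki/intermediate/cross-sign \
    common_name="Intermediate B" \
    key_ref="<cert-B-key-id>" > cross-signed.csr

$ # Sign the CSR with cert A (this step may instead happen outside Vault)
$ vault write -field=certificate pki/issuer/cert-a/sign-intermediate \
    csr=@cross-signed.csr \
    common_name="Intermediate B" \
    use_csr_values=true > cross-signed.pem

$ # Import the resulting cross-signed certificate
$ vault write pki/issuers/import/cert pem_bundle=@cross-signed.pem
```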
If this process succeeded, and both `cert A` and `cert B` and their key
material live in Vault, the newly imported cross-signed certificate
will have a `ca_chain` response field [during read](/vault/api-docs/secret/pki#read-issuer)
containing `cert A`, and `cert B`'s `ca_chain` will contain the cross-signed
cert and its `ca_chain` value.
~> Note: Regardless of issuer type, it is important to provide all relevant
parameters as they were originally; Vault does not infer e.g., the Subject
name parameters from the existing issuer; it merely reuses the same key
material.
##### Notes on `manual_chain`
If an intermediate is cross-signed and imported into the same mount as its
pair, Vault will not detect the cross-signed pairs during automatic chain
building. As a result, leaf issuance will have a chain that only includes
one of these pairs of chains. This is because the leaf issuance's `ca_chain`
parameter copies the value from the signing issuer directly, rather than computing
its own copy of the chain.
To fix this, update the `manual_chain` field on the [issuers](/vault/api-docs/secret/pki#update-issuer)
to include the chains of both pairs. For instance, given `intA` signed by
`rootA` and `intB` signed by `rootB` as its cross-signed version, one
could do the following:
```shell-session
$ vault patch pki/issuer/intA manual_chain=self,rootA,intB,rootB
$ vault patch pki/issuer/intB manual_chain=self,rootB,intA,rootA
```
This will ensure that issuance with either copy of the intermediate reports
the full cross-signed chain when signing leaf certs.
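To confirm the result, read back either issuer's chain (reusing the
hypothetical `intA` from above):

```shell-session
$ vault read -field=ca_chain pki/issuer/intA
```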
### Reissuance primitive
The second most common type of rotation primitive. In this scheme, the existing
key material is used to generate a new certificate, usually at a much later
point in time from the existing issuance.
While similar to the cross-signed primitive, this one differs in that usually
the reissuance happens after the original certificate expires or is close to
expiration and is reissued by the original root CA. In the event of a
self-signed certificate (e.g., a root certificate), this parent certificate
would be itself. In both cases, this changes the contents of the certificate
(due to the new serial number) but allows all existing leaf signatures to
still validate.
Unlike the cross-signed primitive, this primitive type can be used on all
types of certificates (including leaves, intermediates, and roots).
#### Process flow
```
-------------------
| generate key pair | ---------------> ...
------------------- ...
| | ...
-------------- -------------- ...
| generate CSR | <-> | generate CSR | ...
-------------- -------------- ...
| | ...
------------------ ------------------ ...
| signed by issuer | -> | signed by issuer | -> ...
------------------ ------------------ ...
```
In this process flow, a single key pair is generated at some point in time
and stored. The CSR (with same requested fields) is generated from this
common key material and signed by the same issuer at multiple points in
time, preserving all critical fields (Subject, Issuer, &c). While there is
strictly no limit on the number of times a key can be reissued, at some point
safety would dictate the key material should be rotated instead of being
continually reissued.
#### Certificate hierarchy
```
------
-----------| root |-------------
/ ------ \
| |
--------------- ---------------
| original cert | <- same key material -> | reissued cert |
--------------- | ---------------
|
-------------------
| leaf certificates |
-------------------
```
Note that while this again results in two trust paths, depending on which
intermediate certificate is presented and is still valid, only a root need be
trusted. When a reissued certificate is a root certificate, the issuance link is
simply a self-loop. But, in this case, note that both certificates are
(technically) valid issuers of each other. This means it should be possible to
provide a reissued root certificate in the TLS certificate chain and have it
chain back to an existing root certificate in a trust store.
This primitive type is thus an incrementing primitive; the life cycle of an
existing key is extended into the future by issuing a new certificate with the
same key material from the existing authority.
#### Execution in Vault
To create a reissued root certificate in Vault, use the [`/issuers/generate/root/existing`
endpoint](/vault/api-docs/secret/pki#generate-root). This allows the generation of a new
root certificate with the existing key material (via the `key_ref` request parameter).
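A hedged example, assuming a mount named `pki` and a placeholder key
reference:

```shell-session
$ vault write pki/issuers/generate/root/existing \
    common_name="Example Root CA" \
    key_ref="<existing-key-id>" \
    issuer_name="root-reissued"
```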
If this process succeeded, when [reading the issuer](/vault/api-docs/secret/pki#read-issuer)
(via `GET /issuer/:issuer_ref`), both issuers (old and reissued) will appear in
each other's `ca_chain` response field (unless prevented by a `manual_chain`
value).
Creating a reissued intermediate certificate in Vault is a three-step
process (sketched after the list below):
1. Use the [`/issuers/generate/intermediate/existing`
endpoint](/vault/api-docs/secret/pki#generate-intermediate-csr)
to generate a new CSR with the existing key material with the `key_ref`
request parameter.
2. Sign this CSR via the same signing process under the same issuer. This
step is specific to the parent CA, which may or may not be Vault.
3. Finally, use the [`/intermediate/set-signed` endpoint](/vault/api-docs/secret/pki#import-ca-certificates-and-keys)
to import the signed certificate from step 2.
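A minimal sketch of these three steps, using hypothetical names and assuming
the parent issuer also lives in the same `pki` mount:

```shell-session
$ # 1. Generate a CSR reusing the existing intermediate's key material
$ vault write -field=csr pki/issuers/generate/intermediate/existing \
    common_name="Example Intermediate" \
    key_ref="<existing-key-id>" > reissue.csr

$ # 2. Sign the CSR under the same parent issuer (may happen outside Vault)
$ vault write -field=certificate pki/issuer/my-root/sign-intermediate \
    csr=@reissue.csr common_name="Example Intermediate" > reissue.pem

$ # 3. Import the signed certificate back into the mount
$ vault write pki/intermediate/set-signed certificate=@reissue.pem
```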
If the process to reissue an intermediate certificate succeeded, when
[reading the issuer](/vault/api-docs/secret/pki#read-issuer) (via
`GET /issuer/:issuer_ref`), both issuers (old and reissued) will have
the same `ca_chain` response field, except for the first entry (unless
prevented by a `manual_chain` value).
~> Note: Regardless of issuer type, it is important to provide all relevant
parameters as they were originally; Vault does not infer e.g., the Subject
name parameters from the existing issuer; it merely reuses the same key
material.
### Temporal primitives
We can use the above primitive types to rotate roots and intermediates to new
keys and extend their lifetimes. This time-based rotation is what ultimately
allows us to rotate root certificates.
There are two main variants of this: a **forward** primitive, wherein an old
certificate is used to bless new key material, and a **backwards** primitive,
wherein a new certificate is used to bless old key material. Both of these
primitives are independently used by Let's Encrypt in the aforementioned
chain of trust document:
- The link from DST Root CA X3 to ISRG Root X1 is an example of a forward
primitive.
- The link from ISRG Root X1 to R3 (which was originally signed by DST Root
CA X3) is an example of a backwards primitive.
For most organizations with a hierarchically structured CA setup, cross-signing
all intermediates with both the new and old root CAs is sufficient for root
rotation.
However, for organizations which have directly issued leaf certificates from a
root, the old root will need to be reissued under the new root (with shorter
duration) to allow these certificates to continue to validate. This combines
both of the above primitives (cross-signing and reissuance) into a single
backwards primitive step. In the future, these organizations should probably
move to a more standard, hierarchical setup.
### Limitations of primitives
The certificate's [Authority Key Identifier](https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.1)
extension field may contain either or both of the issuer's keyIdentifier
(a hash of the public key) and the issuer's Subject and Serial Number
fields. Generating certificates with the latter enabled (luckily not possible
in Vault, especially so since Vault uses strictly random serial numbers)
prevents building a proper cross-signed chain without re-issuing the same
serial number, which will not work with most browsers' trust stores and
validation engines, due to [caching of
certificates](https://support.mozilla.org/en-US/kb/Certificate-contains-the-same-serial-number-as-another-certificate)
used in successful validations. In the strictest sense, when using a
cross-signing primitive (from a different CA), the intermediate could be reissued
with the same serial number, assuming no previous certificate was issued by that
CA with that serial. This does not work when using a reissuance primitive as these
are technically the same authority and thus this authority must issue
certificates with unique serial numbers.
## Suggested root rotation procedure
The following is a suggested process for achieving root rotation easily and
without (outage) impact to the broader organization, assuming [best
practices](/vault/docs/secrets/pki/considerations#use-a-ca-hierarchy) are
being followed. Some adaptation will be necessary.
Note that this process takes time. How much time is dependent on the
automation level and operational awareness of the organization.
1. [Generate](/vault/api-docs/secret/pki#generate-root) the new root
certificate. For clarity, it is suggested to use a new common name
to distinguish it from the old root certificate. Key material need
not be the same.
2. [Cross-sign](#cross-signed-primitive) all existing intermediates.
It is important to update the manual chain on the issuers as discussed
in that section, as we assume servers are configured to combine the
`certificate` field with the `ca_chain` field on renewal and issuance,
thus getting the cross-signed intermediates.
3. Encourage rotation to pick up the new cross-signed intermediates. With
short-lived certificates, this should [happen
automatically](/vault/docs/secrets/pki/considerations#automate-leaf-certificate-renewal).
However, for some long-lived certs, it is suggested to rotate them
manually and proactively. This step takes time, and depends on the
types of certificates issued (e.g., server certs, code signing, or client
auth).
4. Once _all_ chains have been updated, new systems can be brought online
with only the new root certificate, and can connect to all existing systems.
5. Existing systems can now be migrated with a one-shot root switch: the
new root can be added and the old root can be removed at the same time.
Assuming the above step 3 can be achieved in a reasonable amount of time,
this decreases the time it takes to move the majority of systems over to
fully using the new root and no longer trusting the old root. This step
also takes time, depending on how quickly the organization can migrate
roots and ensure all such systems are migrated. If some systems are
offline and only infrequently online (or, if they have hard-coded
certificate stores and need to reach obsolescence first), the organization
might not be ready to move on to future steps.
6. At this point, since all systems now use the new root, it is safe to remove
or archive the old root and intermediates, updating the manual chain to
point strictly to the new intermediate+root.
At this point, rotation is fully completed.
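For reference, a hedged sketch of steps 1 and 2 (all names and values are
illustrative):

```shell-session
$ # 1. Generate the new root with a distinguishing common name
$ vault write pki/issuers/generate/root/internal \
    common_name="Example Root CA G2" issuer_name="root-g2"

$ # 2. Cross-sign an existing intermediate with the new root (the CSR
$ #    comes from the cross-sign endpoint described earlier)
$ vault write pki/issuer/root-g2/sign-intermediate \
    csr=@cross-signed.csr common_name="Example Intermediate" \
    use_csr_values=true
```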
## Tutorial
Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial.
Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
for more about how to use externally managed keys with PKI.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details.
---
layout: docs
page_title: Enrollment over Secure Transport (EST) within Vault | PKI - Secrets Engines
description: An overview of the Enrollment over Secure Transport protocol implementation within Vault.
---
# PKI secrets engine - Enrollment over Secure Transport (EST) <EnterpriseAlert inline="true" />
This document covers configuration and limitations of Vault's PKI Secrets Engine
implementation of the [EST protocol](https://datatracker.ietf.org/doc/html/rfc7030) <EnterpriseAlert inline="true" />.
## What is Enrollment over Secure Transport (EST)?
The EST protocol is an IETF standardized protocol, [RFC 7030](https://datatracker.ietf.org/doc/html/rfc7030),
that allows clients to acquire client certificates and associated
Certificate Authority (CA) certificates.
## Enabling EST support on a Vault PKI mount
The following is a list of steps required to configure an existing PKI
mount to serve EST clients. The steps break down into three main
categories.
1. [Authentication mechanisms](#configuring-est-authentication)
1. [Updating PKI tunable parameters](#updating-the-pki-mount-tunable-parameters)
1. [PKI EST configuration](#pki-est-configuration)
### Configuring EST Authentication
The EST protocol specifies a few different authentication mechanisms, of which
Vault supports two.
1. [HTTP-Based Client authentication](/vault/docs/auth/userpass)
1. [Certificate TLS authentication](/vault/docs/auth/cert)
Both of these authentication mechanisms leverage a separate Vault authentication
mount, within the same namespace, to validate the client-provided credentials
and determine the ACL policy to enforce. While both authentication
schemes can be enabled at once, only a single mount will be used to authenticate
a client based on the way credentials were provided through EST. If an EST client sends
HTTP-Based authentication credentials, they will be preferred over TLS client
certificates.
For proper accounting, mounts supporting EST authentication should be
dedicated to this purpose, not shared with other workflows. In other words,
create a new auth mount for EST even if you already have one of the same
type for other purposes.
When setting up the authentication mount for EST clients, the token type must
be configured to return [batch tokens](/vault/docs/concepts/tokens#batch-tokens).
Batch tokens are required to avoid an excessive amount of leases being generated
and persisted as every EST incoming request needs to be authenticated.
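For example, a dedicated cert auth mount returning batch tokens might be set
up as follows (the mount path `est-cert` is illustrative):

```shell-session
$ vault auth enable -path=est-cert cert
$ vault auth tune -token-type=batch est-cert
```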
The path within an ACL policy must match the internal redirected path, including
the mount, and not the `.well-known/est/` URI the client initially uses.
The path to use within the plugin depends on the path policy that is configured
for the EST label being used by the client.
If using `sign-verbatim` as a path policy, the following
ACL policy will allow an authenticated client access to the required PKI EST paths.
```
path "pki/est/simpleenroll" {
  capabilities = ["update", "create"]
}
path "pki/est/simplereenroll" {
  capabilities = ["update", "create"]
}
```
For a role-based path policy, this sample policy can be used:
```
path "pki/roles/my-role-name/est/simpleenroll" {
  capabilities = ["update", "create"]
}
path "pki/roles/my-role-name/est/simplereenroll" {
  capabilities = ["update", "create"]
}
```
### Updating the PKI mount tunable parameters
Once the authentication mount has been created and configured, the authentication mount's accessor
will need to be captured and added within the PKI mount's [delegated auth accessors](/vault/api-docs/system/mounts#delegated_auth_accessors).
To get an authentication mount's accessor field, the following command can be used.
```shell-session
$ vault read -field=accessor sys/auth/userpass
```
For EST to work within certain clients, a few response headers need to be explicitly allowed
along with configuring the list of accessors the mount can delegate authentication towards.
The following will grant the required response headers; you will need to replace the values for the `delegated-auth-accessors`
to match your values.
```shell-session
$ vault secrets tune \
-allowed-response-headers="Content-Transfer-Encoding" \
-allowed-response-headers="Content-Length" \
-allowed-response-headers="WWW-Authenticate" \
-delegated-auth-accessors="auth_userpass_e2f4f6d5" \
-delegated-auth-accessors="auth_cert_4088ac2d" \
pki
```
### PKI EST configuration
The EST protocol specifies that an EST server must support a URI path-prefix of
`.well-known/est/` as defined in [RFC-5785](https://datatracker.ietf.org/doc/html/rfc5785).
EST clients normally don't provide any sort of configuration for different path-prefixes, and will
default to hitting the host on the path `https://<hostname>:<port>/.well-known/est/`.
Some clients allow a single label, sometimes referred to as `additional path segment`,
to accommodate different issuers. This label will be added to the path after
the est path such as `https://<hostname>:<port>/.well-known/est/<label>/`.
To provide different restrictions around usage (defaults, an issuer, or a role)
for EST protocol endpoints, a path policy is associated with the EST
label.
@include 'pki-est-default-policy.mdx'
Within the Vault [EST configuration API](/vault/api-docs/secret/pki/issuance#set-est-configuration), a PKI
mount can be specified as the default mount by setting [default_mount](/vault/api-docs/secret/pki/issuance#default_mount)
to true, or a mapping of labels can be provided within [label_to_path_policy](/vault/api-docs/secret/pki/issuance#label_to_path_policy).
The following is an example of a complete EST configuration that enables the pki mount
to register the default `.well-known/est` label, along with two additional labels
of `test-label` and `sign-all`.
The `test-label` would use the existing `est-clients` PKI role for restrictions and defaults,
leveraging the issuer specified within the role. The other two labels, default and `sign-all`, will
leverage a sign-verbatim type role, allowing any identifier to be issued using the default
issuer.
```shell-session
$ vault write pki/config/est - <<EOC
{
"enabled": true,
"default_mount": true,
"default_path_policy": "sign-verbatim",
"label_to_path_policy": {
"test-label": "role:est-clients",
"sign-all": "sign-verbatim"
},
"authenticators": {
"cert": {
"accessor": "auth_cert_4088ac2d"
},
"userpass": {
"accessor": "auth_userpass_e2f4f6d5"
}
}
}
EOC
```
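Assuming the write succeeds, the configuration can be read back for
verification:

```shell-session
$ vault read pki/config/est
```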
## Limitations
### EST API Support
The initial implementation covers solely the required API endpoints of the EST protocol.
The following optional features from the specification are not currently supported.
- [Full CMC](https://datatracker.ietf.org/doc/html/rfc7030#section-4.3)
- [Server-side key generation](https://datatracker.ietf.org/doc/html/rfc7030#section-4.4)
- [CSR attribute endpoints](https://datatracker.ietf.org/doc/html/rfc7030#section-4.5)
### Well Known redirections
The EST configuration parameters `default_mount` and/or `label_to_path_policy` can be used to register
paths within the .well-known path space. The following limitations apply:
- Only a single PKI mount, across all namespaces, can be enabled as the `default_mount`.
- Labels within `label_to_path_policy` must also be unique across all PKI mounts regardless of namespace.
- Care must be taken if enabling EST on a [local](/vault/docs/commands/secrets/enable#local) PKI mount on
performance secondary clusters. Vault cannot guarantee the configured EST labels do
not conflict across different PKI mounts in this use-case. This can lead to
different issuers being used across clusters for the same EST labels.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more details.
vault Troubleshoot PKI Secrets Engine and ACME Secrets Engine s ACME server Solve common problems related to ACME client integration with Vault PKI page title PKI Secrets Engine Troubleshooting ACME layout docs Troubleshoot problems with ACME clients and Vault PKI Secrets Engine s ACME server | ---
layout: docs
page_title: 'PKI - Secrets Engine: Troubleshooting ACME'
description: Troubleshoot problems with ACME clients and Vault PKI Secrets Engine's ACME server.
---
# Troubleshoot PKI Secrets Engine and ACME
Solve common problems related to ACME client integration with Vault PKI
Secrets Engine's ACME server.
## Error: ACME feature requires local cluster 'path' field configuration to be set
If ACME works on some nodes of a Vault Enterprise cluster but not on
others, it likely means that the cluster address has not been set.
### Symptoms
When a Vault client reads the ACME config (`/config/acme`) on a
Performance Secondary node, or when an ACME client attempts to connect to a
directory on this node, it will error with:
> ACME feature requires local cluster 'path' field configuration to be set
### Cause
In most cases, cluster path errors mean that the required cluster address is
not set in your cluster configuration.
### Resolution
For each Performance Replication cluster, read the value of `/config/cluster`
and ensure the `path` field is set. When it is missing, update the URL to
point to this mount's path on a TLS-enabled address for this PR cluster; this
domain may be a load-balanced or DNS round-robin address. For example:
```
$ vault write pki/config/cluster path=https://cluster-b.vault.example.com/v1/pki
```
Once this is done, re-read the ACME configuration and make sure no warnings
appear:
```
$ vault read pki/config/acme
```
## Error: Unable to register an account with the ACME server
### Symptoms
When registering a new account without an [External Account Binding
(EAB)](/vault/api-docs/secret/pki#acme-external-account-bindings), the
Vault Server rejects the request with a response like:
> Unable to register an account with ACME server
with further information provided in the debug logs (in the case of
`certbot`):
> Server requires external account binding.
or, if the client incorrectly contacted the server, an error like:
> The request must include a value for the 'externalAccountBinding' field
In either case, a new account needs to be created with an EAB token created
by Vault.
### Cause
If a server has been updated to require `eab_policy=always-required` in the
[ACME configuration](/vault/api-docs/secret/pki#set-acme-configuration),
new account registration (and reuse of existing accounts) will fail.
### Resolution
Using a Vault token, [fetch a new external account
binding](/vault/api-docs/secret/pki#get-acme-eab-binding-token) for
the [desired directory](/vault/api-docs/v1.14.x/secret/pki#acme-directories):
```
$ vault write -f pki/roles/my-role-name/acme/new-eab
...
directory roles/my-role-name/acme/directory
id bc8088d9-3816-5177-ae8e-d8393265f7dd
key MHcCAQE... additional data elided ...
...
```
Then pass this new EAB token into the ACME client. For example, with
`certbot`:
```
$ certbot [... additional parameters ...] \
--server https://cluster-b.vault.example.com/v1/pki/roles/my-role-name/acme/directory \
--eab-kid bc8088d9-3816-5177-ae8e-d8393265f7dd \
--eab-hmac-key MHcCAQE... additional data elided ...
```
Ensure that the ACME directory passed to the ACME client matches that
fetched from Vault.
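
You can verify which directory object a given URL serves by fetching it
directly; a quick sketch using the example hostname above:

```
$ curl -s https://cluster-b.vault.example.com/v1/pki/roles/my-role-name/acme/directory
```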
## Error: Failed to verify eab
### Symptoms
When initializing a new account against this Vault server, the ACME client
might error with a message like:
> The client lacks sufficient authorization :: failed to verify eab
This is caused by requesting an EAB from a directory not matching the
one the client used.
### Cause
If an EAB account token is incorrectly used with the wrong directory, the
ACME server will reject the request with an error about insufficient
permissions.
### Resolution
Ensure the requested EAB token matches the directory. For a given directory
at `/some/path/acme/directory`, fetch EAB tokens from
`/some/path/acme/new-eab`. The remaining resolution steps are the same as
for [debugging account registration
failures](#debugging-account-registration-failures).
## Error: ACME validation failed for `{challenge_id}`
### Symptoms
When viewing the Vault server logs or attempting to fetch a certificate via
an ACME client, an error like:
> ACME validation failed for a465a798-4400-6c17-6735-e1b38c23de38-tls-alpn-01: ...
indicates that the server was unable to validate this challenge accepted
by the client.
### Cause
Vault cannot verify the server's identity through the client's requested
[challenge type](/vault/api-docs/secret/pki#acme-challenge-types) (`dns-01`,
`http-01`, or `tls-alpn-01`). Vault will not issue the certificate requested
by the client.
### Resolution
Ensure that DNS is configured correctly from the Vault server's perspective,
including setting [any custom DNS resolver](/vault/api-docs/secret/pki#dns_resolver).
Ensure that any firewalls are set up to allow Vault to talk to the relevant
systems (the DNS server in the case of `dns-01`, port 80 on the target
machine for `http-01`, or port 443 on the target machine for `tls-alpn-01`
challenges).
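
If Vault cannot resolve the names it needs during validation, point the ACME
configuration at an internal resolver. The resolver address below is
illustrative, and note that writing `/config/acme` replaces the existing
configuration, so include your other ACME settings in the same write:

```
$ vault write pki/config/acme enabled=true dns_resolver=10.0.0.53:53
```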
## Error: The client lacks sufficient authorization: account in status: revoked
### Symptoms
When attempting to renew a certificate, the ACME client reports an error:
> The client lacks sufficient authorization: account in status: revoked
### Cause
If you run a [manual tidy](/vault/api-docs/secret/pki#tidy_acme) or have
[auto-tidy](/vault/api-docs/secret/pki#configure-automatic-tidy) enabled
with `tidy_acme=true`, Vault will periodically remove stale ACME accounts.
Connections from clients using removed accounts will be rejected.
### Resolution
Refer to the ACME client's documentation for removing cached local
configuration, then set up a new account, specifying any EABs as required.
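
With `certbot`, for example, cached account data lives under
`/etc/letsencrypt/accounts` by default; removing the subdirectory for this
Vault server forces a fresh registration (the path shown is illustrative):

```
$ sudo rm -rf /etc/letsencrypt/accounts/cluster-b.vault.example.com
```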
## Get help
Please provide the following information when contacting HashiCorp Support
or filing a GitHub issue to help with our investigation and reproducibility:
- ACME client name and version
- ACME client logs and/or output
- Vault server **DEBUG** level logs
## Tutorial
Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial.
Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
for more about how to use externally managed keys with PKI.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details.

---
layout: docs
page_title: 'PKI - Secrets Engines: Setup and Usage'
description: The PKI secrets engine for Vault generates TLS certificates.
---
# PKI secrets engine - setup and usage
This document provides a brief overview of the setup and usage of the PKI
Secrets Engine.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the PKI secrets engine:
```text
$ vault secrets enable pki
Success! Enabled the pki secrets engine at: pki/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
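
   For example, a minimal sketch mounting a second PKI engine for an
   intermediate CA (the path name is illustrative):

   ```text
   $ vault secrets enable -path=pki_int pki
   Success! Enabled the pki secrets engine at: pki_int/
   ```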
1. Increase the TTL by tuning the secrets engine. The default value of 30 days may be too short, so increase it to 1 year:
```text
$ vault secrets tune -max-lease-ttl=8760h pki
Success! Tuned the secrets engine at: pki/
```
Note that individual roles can restrict this value to be shorter on a
per-certificate basis. This just configures the global maximum for this
secrets engine.
1. Configure a CA certificate and private key. Vault can accept an existing key
pair, or it can generate its own self-signed root. In general, we recommend
maintaining your root CA outside of Vault and providing Vault a signed
intermediate CA.
```text
$ vault write pki/root/generate/internal \
common_name=my-website.com \
ttl=8760h
Key Value
--- -----
certificate -----BEGIN CERTIFICATE-----...
expiration 1536807433
issuing_ca -----BEGIN CERTIFICATE-----...
serial_number 7c:f1:fb:2c:6e:4d:99:0e:82:1b:08:0a:81:ed:61:3e:1d:fa:f5:29
```
The returned certificate is purely informative. The private key is safely
stored internally in Vault.
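
   If you later need to distribute the CA certificate to clients, it is
   available from the unauthenticated CA endpoint (shown here for the default
   `pki` mount path):

   ```text
   $ curl -s http://127.0.0.1:8200/v1/pki/ca/pem
   ```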
1. Update the CRL location and issuing certificates. These values can be updated
in the future.
```text
$ vault write pki/config/urls \
issuing_certificates="http://127.0.0.1:8200/v1/pki/ca" \
crl_distribution_points="http://127.0.0.1:8200/v1/pki/crl"
Success! Data written to: pki/config/urls
```
1. Configure a role that maps a name in Vault to a procedure for generating a
certificate. When users or machines generate credentials, they are generated
against this role:
```text
$ vault write pki/roles/example-dot-com \
allowed_domains=my-website.com \
allow_subdomains=true \
max_ttl=72h
Success! Data written to: pki/roles/example-dot-com
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by writing to the `/issue` endpoint with the name
of the role:
```text
$ vault write pki/issue/example-dot-com \
common_name=www.my-website.com
Key Value
--- -----
certificate -----BEGIN CERTIFICATE-----...
issuing_ca -----BEGIN CERTIFICATE-----...
private_key -----BEGIN RSA PRIVATE KEY-----...
private_key_type rsa
serial_number 1d:2e:c6:06:45:18:60:0e:23:d6:c5:17:43:c0:fe:46:ed:d1:50:be
```
The output will include a dynamically generated private key and certificate
that correspond to the given role and expire in 72h (as dictated by the
role definition). The issuing CA and trust chain are also returned for
automation simplicity.
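
   If a credential is compromised before it expires, you can revoke it by
   serial number; a sketch using the serial number from the output above:

   ```text
   $ vault write pki/revoke serial_number=1d:2e:c6:06:45:18:60:0e:23:d6:c5:17:43:c0:fe:46:ed:d1:50:be
   ```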
## Tutorial
Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial.
Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
for more about how to use externally managed keys with PKI.
## API
The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more
details.

---
layout: docs
page_title: Migrate Consul to Raft storage
description: >-
Guide to migration of Consul storage to Raft.
---
# Migrate Consul to Raft storage
This procedure assumes you have a Vault cluster deployed in a Kubernetes environment and configured with Consul storage. The storage migration can occur while leaving the Consul cluster intact; the only change to the Consul cluster is a lock file written by Vault during the migration.
This guide uses basic examples and default Vault configurations. It is for illustrative purposes; adaptation to the specific configurations of your environment is still required.
<Warning title="Back up data">
Always back up your data before attempting migration! Although this is an offline operation and the risk is low, it is advisable to take a recent snapshot from your Consul cluster before proceeding.
</Warning>
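
For example, you can take a snapshot with the Consul CLI; a minimal sketch
assuming a Consul server pod named `consul-server-0`:

```shell-session
$ kubectl exec -it consul-server-0 -- consul snapshot save /tmp/pre-migration.snap
```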
## Overview
This guide uses an intermediate Helm configuration to introduce an init container that performs the storage migration, and then starts a single Vault server using the Raft storage backend so you can verify the results. You then update the Helm configuration to remove the init container and start the remaining Vault replicas.
### Vault and Kubernetes setup
Consider the following `vault status` output and Helm Chart values for Vault:
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.14.8+ent
Build Date 2023-12-05T01:49:39Z
Storage Type consul
Cluster Name vault-cluster-68870bf8
Cluster ID cd18c692-f2e3-77a5-fba3-28f06f41f375
HA Enabled true
HA Cluster https://vault-0.vault-internal:8201
HA Mode active
Active Since 2024-04-10T02:45:33.367042122Z
Last WAL 52
```
</CodeBlockConfig>
Helm chart values:
<CodeBlockConfig hideClipboard>
```plaintext
global:
enabled: false
server:
enabled: true
image:
repository: hashicorp/vault-enterprise
tag: 1.14.8-ent
enterpriseLicense:
secretName: vault-license
secretKey: vault.hclic
ha:
enabled: true
replicas: 3
config: |
ui = true
service_registration "kubernetes" {}
listener "tcp" {
address = ":8200"
cluster_address = ":8201"
tls_disable = 1
}
storage "consul" {
path = "vault"
address = "http://HOST_IP:8500"
}
```
</CodeBlockConfig>
### Migration procedure
1. Uninstall Vault via Helm.
```shell-session
$ helm uninstall vault
```
Deployed `StatefulSets` cannot have certain attributes modified after their initial deployment. Therefore, the `StatefulSet` deployment must be entirely replaced.
Vault servers using Consul storage are by default stateless. Unless explicitly configured, the Vault server `StatefulSet` does not create any Persistent Volume Claims (PVC) or other artifacts. Vault's index holds its state, which is entirely stored in the Consul server `StatefulSet`'s persistent volumes.
<Warning title='Caution'>
It is strongly advised to review your Vault deployment configurations and take appropriate backups for any stateful information managed via Helm or other orchestration platforms.
</Warning>
1. Create a `ConfigMap` containing the Storage Migration configuration.
```shell-session
$ cat > vault-storage-migration-configmap.yml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/instance: vault
app.kubernetes.io/name: vault
name: storage-migration
namespace: default
data:
migrate.hcl: |-
storage_source "consul" {
address = "http://consul-server.default.svc.cluster.local:8500"
path = "vault/"
}
storage_destination "raft" {
path = "/vault/data"
}
cluster_addr = "https://vault-0.vault-internal:8201"
EOF
```
In many deployments, Vault servers communicate with Consul via a local Consul client agent. This example uses the service endpoint for a Consul server deployed in Kubernetes, although it can work for a Consul server cluster deployed outside of Kubernetes as well.
1. Apply the `ConfigMap`.
```shell-session
$ kubectl create -f vault-storage-migration-configmap.yml
```
1. Install Vault via Helm deployment with Raft Migration storage configuration.
```shell-session
$ cat > vault-migration-values.yml <<EOF
global:
enabled: false
server:
enabled: true
image:
repository: hashicorp/vault-enterprise
tag: 1.14.8-ent
enterpriseLicense:
secretName: vault-license
secretKey: vault.hclic
extraInitContainers:
- name: vault-storage-migration
image: hashicorp/vault-enterprise:1.14.8-ent
command:
- "/bin/sh"
- "-ec"
args:
- "/bin/vault operator migrate -config /vault/storage-migration/migrate.hcl"
volumeMounts:
- name: storage-migration
mountPath: "/vault/storage-migration"
- name: data
mountPath: "/vault/data"
volumeMounts:
- name: storage-migration
mountPath: "/vault/storage-migration"
volumes:
- name: storage-migration
configMap:
name: storage-migration
dataStorage:
enabled: true
size: "1Gi"
ha:
enabled: true
replicas: 1
raft:
enabled: true
config: |
ui = true
service_registration "kubernetes" {}
listener "tcp" {
address = ":8200"
cluster_address = ":8201"
tls_disable = 1
}
storage "raft" {
path = "/vault/data"
retry_join {
auto_join_scheme = "http"
auto_join = "provider=k8s"
}
}
EOF
```
**Configuration notes**
- The `storage "raft"` stanza specifies the path for the Raft DB (`/vault/data` by default) and any `retry_join` parameters carried over from your original configuration.
- This example uses `auto_join` to automatically find Raft peers via the Kubernetes API. See the [`retry_join` stanza](/vault/docs/configuration/storage/raft#retry_join-stanza) documentation for more information.
- `dataStorage` configuration in the Helm override values, to specify the parameters of the PVCs the Vault `StatefulSet` will create.
- `extraInitContainers` will start an init container mounting the storage migration ConfigMap and `data` volume, which it will then use to execute the storage migration.
- `replicas: 1`
- This setting is temporary for the purposes of the migration. Deploying the new Vault `StatefulSet` with a single replica lets you confirm that the init container completed the migration and unseal Vault using the new storage backend.
1. Apply this configuration.
```shell-session
$ helm install vault hashicorp/vault -f vault-migration-values.yml
```
1. Review the migration logs.
```shell-session
$ kubectl logs vault-0 -c vault-storage-migration
```
1. Unseal Vault.
```shell-session
$ kubectl exec -it vault-0 -- vault operator unseal
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.14.8+ent
Build Date 2023-12-05T01:49:39Z
Storage Type raft
Cluster Name vault-cluster-68870bf8
Cluster ID cd18c692-f2e3-77a5-fba3-28f06f41f375
HA Enabled true
HA Cluster https://vault-0.vault-internal:8201
HA Mode active
Active Since 2024-04-10T04:20:23.707098402Z
Raft Committed Index 157
Raft Applied Index 157
Last WAL 55
```
1. Update Vault Helm deployment with Raft storage configuration.
```shell-session
$ cat > vault-raft-values.yml <<EOF
global:
enabled: false
server:
enabled: true
image:
repository: hashicorp/vault-enterprise
tag: 1.14.8-ent
enterpriseLicense:
secretName: vault-license
secretKey: vault.hclic
dataStorage:
enabled: true
size: "1Gi"
ha:
enabled: true
replicas: 5
raft:
enabled: true
config: |
ui = true
service_registration "kubernetes" {}
listener "tcp" {
address = ":8200"
cluster_address = ":8201"
tls_disable = 1
}
storage "raft" {
path = "/vault/data"
retry_join {
auto_join_scheme = "http"
auto_join = "provider=k8s"
}
}
EOF
```
**Configuration notes**
- `replicas: 5`
- Upgrade the Helm deployment in place using the final Raft storage configuration, removing the `extraInitContainers` entry and the storage migration `ConfigMap`, and increasing the number of replicas. The `retry_join` parameters are used by the new Vault server replicas to automatically join the cluster.
1. Apply the configuration.
```shell-session
$ helm upgrade vault hashicorp/vault -f vault-raft-values.yml
```
1. Unseal Vault.
```shell-session
$ for i in {0..4} ; do kubectl exec -it vault-$i -- vault operator unseal ; done
```
1. Confirm the Raft peers have formed a quorum.
```shell-session
$ kubectl exec -it vault-0 -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
24c166d8-a8bb-3ac7-f8a0-12bd066a34bb vault-0.vault-internal:8201 leader true
626434d1-170b-575a-2a04-af4f2e90820b vault-1.vault-internal:8201 follower true
1dfbba31-9b5b-2d16-18ce-bfa7b6c0ead6 vault-2.vault-internal:8201 follower true
3f333082-1a64-7559-0142-e4f1658a28f3 vault-3.vault-internal:8201 follower true
9ca5a15e-3ddc-d132-0b46-5b895f3828dc vault-4.vault-internal:8201 follower true
```
## Rollback procedure
To revert to the original configuration, delete the Helm deployment and re-deploy it using the override values that specify your Consul storage configuration.
Note that the Vault Helm Chart's default configuration using Raft storage will retain any PVCs created. Vault does not use these while configured with Consul storage. You will need to remove the PVCs before re-attempting the migration.
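
A sketch of inspecting and removing those PVCs, assuming the chart's default
volume claim template name of `data` and a release name of `vault`:

```shell-session
$ kubectl get pvc
$ kubectl delete pvc data-vault-0
```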
1. Uninstall Vault via Helm.
```shell-session
$ helm uninstall vault
```
1. Install Vault via Helm with old Consul storage configuration.
```shell-session
$ helm install vault hashicorp/vault -f vault-consul-values.yml
```
1. Unseal Vault and confirm the storage has reverted to Consul.
<CodeBlockConfig highlight="11">
```shell-session
$ kubectl exec -it vault-0 -- vault status
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.14.8+ent
Build Date 2023-12-05T01:49:39Z
Storage Type consul
Cluster Name vault-cluster-68870bf8
Cluster ID cd18c692-f2e3-77a5-fba3-28f06f41f375
HA Enabled true
HA Cluster https://vault-0.vault-internal:8201
HA Mode active
Active Since 2024-04-10T04:44:12.516016652Z
Last WAL 54
```
</CodeBlockConfig>
## References
- [Vault operator migrate command](/vault/docs/commands/operator/migrate)
- [Helm Chart configuration](/vault/docs/platform/k8s/helm/configuration)
- [Vault on Kubernetes deployment guide](/vault/tutorials/kubernetes/kubernetes-raft-deployment-guide)
- [Vault Helm Chart configuration](https://github.com/hashicorp/vault-helm)
- [kubectl commands](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands)
- [Kubernetes storage volumes](https://kubernetes.io/docs/concepts/storage/volumes/)
- [Create a Pod that has an Init Container](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
- [Helm docs](https://helm.sh/docs/)

---
layout: docs
page_title: Agent Injector vs. Vault CSI Provider
description: This section compares Sidecar Injector and Vault CSI Provider for Kubernetes and Vault integration.
---
# Agent injector vs. Vault CSI provider
This document explores two different methods for integrating HashiCorp Vault with Kubernetes. The information provided is intended for DevOps practitioners who understand secret management concepts and are familiar with HashiCorp Vault and Kubernetes. This document also offers practical guidance to help you understand and choose the best method for your use case.
Information contained within this document details the contrast between the Agent Injector, also referred to as _Vault Sidecar_ or _Sidecar_ in this document, and the Vault Container Storage Interface (CSI) provider used to integrate Vault and Kubernetes.
## Vault sidecar agent injector
The [Vault Sidecar Agent Injector](/vault/docs/platform/k8s/injector) leverages the [sidecar pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar) to alter pod specifications to include a Vault Agent container that renders Vault secrets to a shared memory volume. By rendering secrets to a shared volume, containers within the pod can consume Vault secrets without being Vault-aware. The injector is a Kubernetes mutating webhook controller. The controller intercepts pod events and applies mutations to the pod if annotations exist within the request. This functionality is provided by the [vault-k8s](https://github.com/hashicorp/vault-k8s) project and can be automatically installed and configured using the Vault Helm chart.

## Vault CSI provider
The [Vault CSI provider](/vault/docs/platform/k8s/csi) allows pods to consume Vault secrets by using ephemeral [CSI Secrets Store](https://github.com/kubernetes-sigs/secrets-store-csi-driver) volumes. At a high level, the CSI Secrets Store driver enables users to create `SecretProviderClass` objects. These objects define which secret provider to use and what secrets to retrieve. When a pod requesting CSI volumes is created, the CSI Secrets Store driver sends the request to the Vault CSI provider if the provider is `vault`. The Vault CSI provider then uses the specified `SecretProviderClass` and the pod’s service account to retrieve the secrets from Vault and mount them into the pod’s CSI volume. Note that the secret is retrieved from Vault and populated to the CSI secrets store volume during the `ContainerCreation` phase. Therefore, pods are blocked from starting until the secrets are read from Vault and written to the volume.

~> **Note**: Secrets are fetched earlier in the pod lifecycle, therefore, they have fewer compatibility issues with Sidecars, such as Istio.
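
For illustration, a minimal sketch of a `SecretProviderClass` that retrieves a
single secret; the Vault address, role name, and secret path are assumptions
for this example:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200" # assumed Vault address
    roleName: "app"                                # Kubernetes auth role
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db-creds"
        secretKey: "password"
```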
Before we get into some of the similarities and differences between the two solutions, let's look at several common design considerations.
- **Secret projections:** Every application requires secrets to be explicitly presented. Typically, applications expect secrets to be either exported as environment variables or written to a file that the application can read on startup. Keep this in mind as you’re deciding on a suitable method to use.
- **Secret scope:** Some applications are deployed across multiple Kubernetes environments (e.g., dev, qa, prod) across your data centers, the edge, or public clouds. Some services run outside of Kubernetes on VMs, serverless, or other cloud-managed services. You may face scenarios where these applications need to share sets of secrets across these heterogeneous environments. Scoping the secrets correctly to be either local to the Kubernetes environment or global across different environments helps ensure that each application can easily and securely access its own set of secrets within the environment it is deployed in.
- **Secret types:** Secrets can be text files, binary files, tokens, or certs, or they can be statically or dynamically generated. They can also be valid permanently or time-scoped, and can vary in size. You need to consider the secret types your application requires and how they’re projected into the application.
- **Secret definition:** You also need to consider how each secret is defined, created, updated, and removed, as well as the tooling associated with that process.
- **Encryption:** Encrypting secrets both at rest and in transit is a critical requirement for many enterprise organizations.
- **Governance:** Applications and secrets can have a many-to-many relationship that requires careful considerations when granting access for applications to retrieve their respective secrets. As the number of applications and secrets scale, so does the challenge of managing their access policies.
- **Secrets updates and rotation:** Secrets can be leased, time-scoped, or automatically rotated, and each scenario needs to be a programmatic process to ensure the new secret is propagated to the application pods properly.
- **Secret caching:** In certain Kubernetes environments (e.g., edge or retail), there is a potential need for secret caching in the case of communication or network failures between the environment and the secret storage.
- **Auditability:** Keeping a secret access audit log detailing all secret access information is critical to ensure traceability of secret-access events.
Now that you're familiar with some of the design considerations, we'll explore the similarities and differences between the two solutions to help you determine the best solution to use as you design and implement your secrets management strategy in a Kubernetes environment.
## Similarities
Both Agent Injection and Vault CSI solutions have the following similarities:
- They simplify retrieving different types of secrets stored in Vault and exposing them to the target pod running on Kubernetes, without the pod needing to handle the non-trivial Vault authentication and retrieval workflow itself. It’s important to note that there is no need to change the application logic or code to use these solutions, which makes it easier to migrate brownfield applications into Kubernetes. Developers working on greenfield applications can leverage the Vault SDKs to integrate with Vault directly.
- They support all types of Vault [secrets engines](/vault/docs/secrets). This support allows you to leverage an extensive set of secret types, ranging from static key-value secrets to dynamically generated database credentials and TLS certs with customized TTL.
- They leverage the application’s Kubernetes pod service account token as [Secret Zero](https://www.hashicorp.com/resources/secret-zero-mitigating-the-risk-of-secret-introduction-with-vault) to authenticate with Vault via the Kubernetes auth method. With this method, there is no need to manage yet another separate identity to identify the application pods when authenticating to Vault.
- Secret lifetime is tied to the lifetime of the pod for both methods. This holds true both for file contents inside the pod and for the Kubernetes secrets that the CSI driver creates: secrets are automatically created and deleted as the pod is created and deleted.

- They require the desired secrets to exist within Vault before deploying the application.
- They require the pod’s service account to bind to a Vault role with a policy enabling access to desired secrets (that is, Kubernetes RBAC isn’t used to authorize access to secrets).
- They can both be deployed via Helm.
- They require successfully retrieving secrets from Vault before the pods are started.
- They rely on user-defined pod annotations to retrieve the required secrets from Vault.
## Differences
Now that you understand the similarities, consider the following differences between these two solutions:
- The Sidecar Agent Injector solution is composed of two elements:
- The Sidecar Service Injector, which is deployed as a cluster service and is responsible for intercepting Kubernetes apiserver pod events and mutating pod specs to add required sidecar containers
- The Vault Sidecar Container, which is deployed alongside each application pod and is responsible for authenticating into Vault, retrieving secrets from Vault, and rendering secrets for the application to consume.
- In contrast, the Vault CSI Driver is deployed as a daemonset on every node in the Kubernetes cluster and uses the Secret Provider Class specified and the pod’s service account to retrieve the secrets from Vault and mount them into the pod’s CSI volume.
- The Sidecar Agent Injector supports [all](/vault/docs/platform/k8s/injector/annotations#vault-hashicorp-com-auth-path) Vault [auto-auth](/vault/docs/agent-and-proxy/autoauth/methods) methods. The Sidecar CSI driver supports only Vault’s [Kubernetes auth method](/vault/docs/platform/k8s/csi/configurations#vaultkubernetesmountpath).
- The Sidecar container launched with every application pod uses [Vault Agent](https://www.hashicorp.com/blog/why-use-the-vault-agent-for-secrets-management), which provides a powerful set of capabilities such as auto-auth, templating, and caching. The CSI driver does not use the Vault Agent and therefore lacks these functionalities.
- The Vault CSI driver supports rendering Vault secrets into Kubernetes secrets and environment variables. The Sidecar Injector Service does not support rendering secrets into Kubernetes secrets; however, you can use [Agent templating](/vault/docs/platform/k8s/injector/examples#environment-variable-example) to render secrets into environment variables.
- The CSI driver uses `hostPath` to mount ephemeral volumes into the pods, which some container platforms (e.g., OpenShift) disable by default. On the other hand, Sidecar Agent Service uses in-memory _tmpfs_ volumes.
- The Sidecar Injector Service [automatically](/vault/docs/agent-and-proxy/agent/template#renewals-and-updating-secrets) renews, rotates, and fetches secrets/tokens, while the CSI driver does not.
## Comparison chart
The below chart provides a high-level comparison between the two solutions.
~> **Note:** Shared Memory Volume Environment Variable can be achieved through [Agent templating](/vault/docs/platform/k8s/injector/examples#environment-variable-example).

## Going beyond the native kubernetes secrets
On the surface, Kubernetes native secrets might seem similar to the two approaches presented above, but there are significant differences between them:
- Kubernetes is not a secrets management solution. It does have native support for secrets, but that is quite different from an enterprise secrets management solution. Kubernetes secrets are scoped to the cluster only, and many applications will have some services running outside Kubernetes or in other Kubernetes clusters. Having these applications use Kubernetes secrets from outside a Kubernetes environment will be cumbersome and introduce authentication and authorization challenges. Therefore, considering the secret scope as part of the design process is critical.
- Kubernetes secrets are static in nature. You can define secrets by using kubectl or the Kubernetes API, but once they are defined, they are stored in etcd and presented to pods only during pod creation. Defining secrets in this manner may create scenarios where secrets get stale, outdated, or expired, requiring additional workflows to update and rotate the secrets, and then re-deploy the application to use the new version, which can add complexity and become quite time-consuming. Ensure consideration is given to all requirements for secret freshness, updates, and rotation as part of your design process.
- The secret access management security model is tied to the Kubernetes RBAC model. This model can be challenging for users who are not familiar with Kubernetes. Adopting a platform-agnostic security governance model can enable you to adapt workflows for applications regardless of how and where they are running.
## Summary
Designing secrets management in Kubernetes is an intricate task. There are multiple approaches, each with its own set of attributes. We recommend exploring the options presented in this document to increase your understanding of the internals and decide on the best option for your use case.
## Additional resources
- [HashiCorp Vault: Delivering Secrets with Kubernetes](https://medium.com/hashicorp-engineering/hashicorp-vault-delivering-secrets-with-kubernetes-1b358c03b2a3)
- [Retrieve HashiCorp Vault Secrets with Kubernetes CSI](https://www.hashicorp.com/blog/retrieve-hashicorp-vault-secrets-with-kubernetes-csi)
- [Mount Vault Secrets Through Container Storage Interface (CSI) Volume](/vault/tutorials/kubernetes/kubernetes-secret-store-driver)
- [Injecting Secrets into Kubernetes Pods via Vault Agent Containers](/vault/tutorials/kubernetes/kubernetes-sidecar)
- [Vault Sidecar Injector Configurations and Examples](/vault/docs/platform/k8s/injector/annotations)
- [Vault CSI Driver Configurations and Examples](/vault/docs/platform/k8s/csi/configurations)
---
layout: docs
page_title: Vault CSI Provider Examples
description: This section documents examples of using the Vault CSI Provider.
---
# Vault CSI provider examples
The following examples demonstrate how the Vault CSI Provider can be used.
~> A common mistake is attempting to use the Vault CSI Provider without first installing the Secrets Store CSI driver.
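If you still need to install the driver, a typical Helm-based installation looks like the following; the release name and namespace are illustrative, and `syncSecret.enabled=true` is only needed if you plan to sync secrets to Kubernetes secrets, as in the environment variable example below:

```shell-session
$ helm repo add secrets-store-csi-driver \
    https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
$ helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
    --namespace kube-system \
    --set syncSecret.enabled=true
```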
## File based dynamic database credentials
The following Secret Provider Class retrieves dynamic database credentials from Vault and
extracts the generated username and password. The secrets are then mounted as files in the
configured mount location.
```yaml
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: vault-db-creds
spec:
provider: vault
parameters:
roleName: 'app'
objects: |
- objectName: "dbUsername"
secretPath: "database/creds/db-app"
secretKey: "username"
- objectName: "dbPassword"
secretPath: "database/creds/db-app"
secretKey: "password"
```
Next, a pod can be created to use this Secret Provider Class to populate the secrets in the pod:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: demo
spec:
selector:
matchLabels:
app: demo
replicas: 1
template:
metadata:
annotations:
labels:
app: demo
spec:
serviceAccountName: app
containers:
- name: app
image: my-app:1.0.0
volumeMounts:
- name: 'vault-db-creds'
mountPath: '/mnt/secrets-store'
readOnly: true
volumes:
- name: vault-db-creds
csi:
driver: 'secrets-store.csi.k8s.io'
readOnly: true
volumeAttributes:
secretProviderClass: 'vault-db-creds'
```
The pod mounts a CSI volume and specifies the Secret Provider Class (`vault-db-creds`) created above.
The secrets created from that provider class are mounted to `/mnt/secrets-store`. When this pod is
created, the containers will find two files containing secrets:
- `/mnt/secrets-store/dbUsername`
- `/mnt/secrets-store/dbPassword`
## Environment variable dynamic database credentials
The following Secret Provider Class retrieves dynamic database credentials from Vault and
extracts the generated username and password. The secrets are then synced to Kubernetes secrets
so that they can be mounted as environment variables in the containers.
```yaml
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: vault-db-creds
spec:
provider: vault
secretObjects:
- secretName: vault-db-creds-secret
type: Opaque
data:
- objectName: dbUsername # References dbUsername below
key: username # Key within k8s secret for this value
- objectName: dbPassword
key: password
parameters:
roleName: 'app'
objects: |
- objectName: "dbUsername"
secretPath: "database/creds/db-app"
secretKey: "username"
- objectName: "dbPassword"
secretPath: "database/creds/db-app"
secretKey: "password"
```
Next, a pod can be created which uses this Secret Provider Class to populate the secrets in the pod's environment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: demo
spec:
selector:
matchLabels:
app: demo
replicas: 1
template:
metadata:
annotations:
labels:
app: demo
spec:
serviceAccountName: app
containers:
- name: app
image: my-app:1.0.0
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: vault-db-creds-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: vault-db-creds-secret
key: password
volumeMounts:
- name: 'vault-db-creds'
mountPath: '/mnt/secrets-store'
readOnly: true
volumes:
- name: vault-db-creds
csi:
driver: 'secrets-store.csi.k8s.io'
readOnly: true
volumeAttributes:
secretProviderClass: 'vault-db-creds'
```
The pod mounts a CSI volume and specifies the Secret Provider Class (`vault-db-creds`) created above.
The secrets created from that provider class are mounted to `/mnt/secrets-store`. Additionally, a Kubernetes
secret called `vault-db-creds-secret` is created and referenced in two environment variables.
---
layout: docs
page_title: Vault CSI Provider
description: >-
The Vault CSI Provider allows pods to consume Vault secrets using CSI volumes.
---
# Vault CSI provider
The Vault CSI Provider allows pods to consume Vault secrets using
[CSI Secrets Store](https://github.com/kubernetes-sigs/secrets-store-csi-driver) volumes.
~> The Vault CSI Provider requires the [CSI Secret Store](https://github.com/kubernetes-sigs/secrets-store-csi-driver)
Driver to be installed.
## Overview
At a high level, the CSI Secrets Store driver allows users to create `SecretProviderClass` objects.
This object defines which secret provider to use and what secrets to retrieve. When pods requesting CSI volumes
are created, the CSI Secrets Store driver will send the request to the Vault CSI Provider if the provider
is `vault`. The Vault CSI Provider will then use Secret Provider Class specified and the pod's service account to retrieve
the secrets from Vault, and mount them into the pod's CSI volume.
The secret is retrieved from Vault and populated to the CSI secrets store volume during the `ContainerCreation` phase.
This means that pods will be blocked from starting until the secrets have been read from Vault and written to the volume.
### Features
The following features are supported by the Vault CSI Provider:
- All Vault secret engines supported.
- Authentication using the requesting pod's service account.
- TLS/mTLS communications with Vault.
- Rendering Vault secrets to files.
- Dynamic lease caching and renewal performed by Agent.
- Syncing secrets to Kubernetes secrets to be used as environment variables.
- Installation via [Vault Helm](/vault/docs/platform/k8s/helm).
@include 'kubernetes-supported-versions.mdx'
## Authenticating with Vault
The Vault CSI Provider will authenticate with Vault as the service account of the
pod that mounts the CSI volume. [Kubernetes](/vault/docs/auth/kubernetes) and
[JWT](/vault/docs/auth/jwt) auth methods are supported. The pod's service account
must be bound to a Vault role and a policy granting access to the secrets desired.
It is highly recommended to run pods with dedicated Kubernetes service accounts to
ensure applications cannot access more secrets than they require.
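For illustration, the matching Vault-side setup for the Kubernetes auth method might look like the following; the role name `app`, the policy `db-creds`, and the bound service account and namespace are placeholders:

```shell-session
$ vault auth enable kubernetes

$ vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:443"

# Bind the pod's service account to a role with the desired policy.
$ vault write auth/kubernetes/role/app \
    bound_service_account_names=app \
    bound_service_account_namespaces=default \
    policies=db-creds \
    ttl=20m
```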
## Secret provider class example
The following is an example of a Secret Provider Class using the `vault` provider:
```yaml
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: vault-db-creds
spec:
# Vault CSI Provider
provider: vault
parameters:
# Vault role name to use during login
roleName: 'app'
# Vault address and TLS connection config is normally best configured by the
# helm chart, but can be overridden per SecretProviderClass:
# Vault's hostname
#vaultAddress: 'https://vault:8200'
# TLS CA certification for validation
#vaultCACertPath: '/vault/tls/ca.crt'
objects: |
- objectName: "dbUsername"
secretPath: "database/creds/db-app"
secretKey: "username"
- objectName: "dbPassword"
secretPath: "database/creds/db-app"
secretKey: "password"
# "objectName" is an alias used within the SecretProviderClass to reference
# that specific secret. This will also be the filename containing the secret.
# "secretPath" is the path in Vault where the secret should be retrieved.
# "secretKey" is the key within the Vault secret response to extract a value from.
```
~> Secret Provider Class is a namespaced object in Kubernetes.
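Because of this, the class must be created in the same namespace as the pods that reference it. You can list the classes defined in a given namespace (named `dev` here for illustration) with:

```shell-session
$ kubectl get secretproviderclasses --namespace=dev
```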
## Using secret provider classes
An application pod uses the example Secret Provider Class above by mounting it as a CSI volume:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: demo
spec:
selector:
matchLabels:
app: demo
replicas: 1
template:
spec:
serviceAccountName: app
containers:
- name: app
image: my-app:1.0.0
volumeMounts:
- name: 'vault-db-creds'
mountPath: '/mnt/secrets-store'
readOnly: true
volumes:
- name: vault-db-creds
csi:
driver: 'secrets-store.csi.k8s.io'
readOnly: true
volumeAttributes:
secretProviderClass: 'vault-db-creds'
```
In this example `volumes.csi` is created on the application deployment and references
the Secret Provider Class named `vault-db-creds`.
## Tutorial
Refer to the [Vault CSI Provider](/vault/tutorials/kubernetes/kubernetes-secret-store-driver)
tutorial to learn how to set up Vault and its dependencies with a Helm chart.
---
layout: docs
page_title: Vault CSI Provider Configurations
description: This section documents the configurables for the Vault CSI Provider.
---
# Command line arguments
The following command line arguments are supported by the Vault CSI provider.
Most settings can be set through any of the following, in ascending order of precedence:
- Environment variables
- Command line arguments
- Secret Provider Class parameters
If installing via the helm chart, they can be set using e.g.
`--set "csi.extraArgs={-debug=true}"`.
- `-cache-size` `(int: 1000)` - Set the maximum number of Vault tokens that will
be cached in-memory. One Vault token will be stored for each pod on the same
node that mounts secrets. Setting to 0 will disable the cache and force each
volume mount request to reauthenticate to Vault.
- `-debug` `(bool: false)` - Set to true to enable debug level logging.
- `-endpoint` `(string: "/tmp/vault.sock")` - Path to unix socket on which the
provider will listen for gRPC calls from the driver.
- `-health-addr` `(string: ":8080")` - The address of the HTTP listener
for reporting health.
- `-hmac-secret-name` `(string: "vault-csi-provider-hmac-key")` - Configure the
Kubernetes secret name that the provider creates to store an HMAC key for
generating secret version hashes.
- `-vault-addr` `(string: "https://127.0.0.1:8200")` - Default address
for connecting to Vault. Can also be specified via the `VAULT_ADDR` environment
variable. **Note:** It is highly recommended to only set the Vault address when
installing the helm chart. The helm chart will install Vault Agent as a sidecar
to the Vault CSI Provider for caching and renewals, but setting `-vault-addr`
here will cause the Vault CSI Provider to bypass the Agent's cache.
- `-vault-mount` `(string: "kubernetes")` - Default Vault mount path
for Kubernetes authentication. Can be overridden per Secret Provider Class
object.
- `-vault-namespace` `(string: "")` - (v1.1.0+) Default Vault namespace for Vault
requests. Can also be specified via the `VAULT_NAMESPACE` environment variable.
- `-vault-tls-ca-cert` `(string: "")` - (v1.1.0+) Path on disk to a single
PEM-encoded CA certificate to trust for Vault. Takes precedence over
`-vault-tls-ca-directory`. Can also be specified via the `VAULT_CACERT`
environment variable.
- `-vault-tls-ca-directory` `(string: "")` - (v1.1.0+) Path on disk to a
directory of PEM-encoded CA certificates to trust for Vault. Can also be
specified via the `VAULT_CAPATH` environment variable.
- `-vault-tls-server-name` `(string: "")` - (v1.1.0+) Name to use as the SNI
host when connecting to Vault via TLS. Can also be specified via the
`VAULT_TLS_SERVER_NAME` environment variable.
- `-vault-tls-client-cert` `(string: "")` - (v1.1.0+) Path on disk to a
PEM-encoded client certificate for mTLS communication with Vault. If set,
also requires `-vault-tls-client-key`. Can also be specified via the
`VAULT_CLIENT_CERT` environment variable.
- `-vault-tls-client-key` `(string: "")` - (v1.1.0+) Path on disk to a
PEM-encoded client key for mTLS communication with Vault. If set, also
requires `-vault-tls-client-cert`. Can also be specified via the
`VAULT_CLIENT_KEY` environment variable.
- `-vault-tls-skip-verify` `(bool: false)` - (v1.1.0+) Disable verification of
TLS certificates. Can also be specified via the `VAULT_SKIP_VERIFY` environment
variable.
- `-version` `(bool: false)` - Print version information and exit.
# Secret provider class parameters
The following parameters are supported by the Vault provider. Each parameter is
an entry under `spec.parameters` in a SecretProviderClass object. The full
structure is illustrated in the [examples](/vault/docs/platform/k8s/csi/examples).
- `roleName` `(string: "")` - Name of the role to be used during login with Vault.
- `vaultAddress` `(string: "")` - The address of the Vault server. **Note:** It is
highly recommended to only set the Vault address when installing the helm chart.
The helm chart will install Vault Agent as a sidecar to the Vault CSI Provider
for caching and renewals, but setting `vaultAddress` here will cause the Vault
CSI Provider to bypass the Agent's cache.
- `vaultNamespace` `(string: "")` - The Vault [namespace](/vault/docs/enterprise/namespaces) to use.
- `vaultSkipTLSVerify` `(string: "false")` - When set to true, skips verification of the Vault server
certificate. Setting this to true is not recommended for production.
- `vaultCACertPath` `(string: "")` - The path on disk where the Vault CA certificate can be found
when verifying the Vault server certificate.
- `vaultCADirectory` `(string: "")` - The directory on disk where the Vault CA certificate can be found
when verifying the Vault server certificate.
- `vaultTLSClientCertPath` `(string: "")` - The path on disk where the client certificate can be found
for mTLS communications with Vault.
- `vaultTLSClientKeyPath` `(string: "")` - The path on disk where the client key can be found
for mTLS communications with Vault.
- `vaultTLSServerName` `(string: "")` - The name to use as the SNI host when connecting via TLS.
- `vaultAuthMountPath` `(string: "kubernetes")` - The name of the auth mount used for login.
Can be a Kubernetes or JWT auth mount. Mutually exclusive with `vaultKubernetesMountPath`.
- `vaultKubernetesMountPath` `(string: "kubernetes")` - The name of the auth mount used for login.
Can be a Kubernetes or JWT auth mount. Mutually exclusive with `vaultAuthMountPath`.
- `audience` `(string: "")` - Specifies a custom audience for the requesting pod's service account token,
generated using the
[TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/#TokenRequestSpec).
The resulting token is used to authenticate to Vault, so if you specify an
[audience](/vault/api-docs/auth/kubernetes#audience) for your Kubernetes auth
role, it must match the audience specified here. If not set, the token audiences will default to
the Kubernetes cluster's default API audiences.
- `objects` `(array)` - An array of secrets to retrieve from Vault.
- `objectName` `(string: "")` - The alias of the object which can be referenced within the secret provider class and
the name of the secret file.
- `method` `(string: "GET")` - The type of HTTP request. Supported values include "GET" and "PUT".
- `secretPath` `(string: "")` - The path in Vault where the secret is located.
For secrets that are retrieved via HTTP GET method, the `secretPath` can include optional URI parameters,
for example, the [version of the KV2 secret](/vault/api-docs/secret/kv/kv-v2#read-secret-version):
```yaml
objects: |
- objectName: "app-secret"
secretPath: "secret/data/test?version=1"
secretKey: "password"
```
- `secretKey` `(string: "")` - The key in the Vault secret to extract. If omitted, the whole response from Vault will be written as JSON.
- `filePermission` `(integer: 0o644)` - The file permissions to set for this secret's file.
- `encoding` `(string: "utf-8")` - The encoding of the secret value. Supports decoding `utf-8` (default), `hex`, and `base64` values.
- `secretArgs` `(map: {})` - Additional arguments to be sent to Vault for a specific secret. Arguments can vary
for different secret engines. For example:
```yaml
secretArgs:
common_name: 'test.example.com'
ttl: '24h'
```
~> `secretArgs` are sent as part of the HTTP request body. Therefore, they are only effective for HTTP PUT/POST requests, for instance,
the [request used to generate a new certificate](/vault/api-docs/secret/pki#generate-certificate).
To supply additional parameters for secrets retrieved via HTTP GET, include optional URI parameters in [`secretPath`](#secretpath).
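Putting these parameters together, a sketch of an `objects` entry that generates a certificate via HTTP PUT might look like the following; the `pki/issue/web` path and the argument values are assumptions about your PKI mount:

```yaml
objects: |
  - objectName: "tls-cert"
    method: "PUT"
    secretPath: "pki/issue/web"
    secretKey: "certificate"
    secretArgs:
      common_name: "test.example.com"
      ttl: "24h"
```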
---
layout: docs
page_title: Vault CSI Provider Installation
description: The Vault CSI Provider can be installed using Vault Helm.
---
# Installing the Vault CSI provider
## Prerequisites
- Kubernetes 1.16+ for both the master and worker nodes (Linux-only)
- [Secrets store CSI driver](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation.html) installed
- `TokenRequest` endpoint available, which requires setting the flags
`--service-account-signing-key-file` and `--service-account-issuer` for
`kube-apiserver`. These flags are set by default in Kubernetes 1.20+, and on earlier versions in most managed Kubernetes services.
## Installation using helm
The [Vault Helm chart](/vault/docs/platform/k8s/helm) is the recommended way to
install and configure the Vault CSI Provider in Kubernetes.
To install a new instance of Vault and the Vault CSI Provider, first add the
HashiCorp helm repository and ensure you have access to the chart:
~> **Note:** Vault CSI Provider Helm installation requires Vault Helm 0.10.0+.
@include 'helm/repo.mdx'
Then install the chart and enable the CSI feature by setting the
`csi.enabled` value to `true`:
~> **Note:** this will also install the Vault server and Agent Injector.
```shell-session
$ helm install vault hashicorp/vault --set="csi.enabled=true"
```
Upgrades may be performed with `helm upgrade` on an existing installation. Please
always run Helm with `--dry-run` before any install or upgrade to verify
changes.
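For example, to preview an upgrade of the release installed above before applying it:

```shell-session
$ helm upgrade vault hashicorp/vault --set="csi.enabled=true" --dry-run
```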
You can see all the available values settings by running `helm inspect values hashicorp/vault` or by reading the [Vault Helm Configuration
Docs](/vault/docs/platform/k8s/helm/configuration). Commonly used values in the Helm
chart include limiting the namespaces the Vault CSI Provider runs in, TLS options, and
more.
## Installation on OpenShift
We recommend using the [Vault agent injector on OpenShift](/vault/docs/platform/k8s/helm/openshift)
instead of the Secrets Store CSI driver. OpenShift
[does not recommend](https://docs.openshift.com/container-platform/4.9/storage/persistent_storage/persistent-storage-hostpath.html)
using `hostPath` mounting in production and does not
[certify Helm charts](https://github.com/redhat-certification/chart-verifier/blob/dbf89bff2d09142e4709d689a9f4037a739c2244/docs/helm-chart-checks.md#table-2-helm-chart-default-checks)
that use CSI objects, because the pods must run as privileged. Privileged pods have elevated access to
other pods on the same node, which OpenShift does not recommend.
You can run the Secrets Store CSI driver with additional
security configurations on an OpenShift development
or testing cluster.
Deploy the Secrets Store CSI driver and Vault Helm chart
to your OpenShift cluster.
Then, patch the `DaemonSet` for the Vault CSI provider to
run with a privileged security context.
```shell-session
$ kubectl patch daemonset vault-csi-provider \
--type='json' \
--patch='[{"op": "add", "path": "/spec/template/spec/containers/0/securityContext", "value": {"privileged": true} }]'
```
The Secrets Store CSI driver and Vault CSI provider need `hostPath` mount access.
Add the service account for the Secrets Store CSI driver to the `privileged`
[security context constraint](https://cloud.redhat.com/blog/managing-sccs-in-openshift).
```shell-session
$ oc adm policy add-scc-to-user privileged system:serviceaccount:${KUBERNETES_VAULT_NAMESPACE}:secrets-store-csi-driver
```
Add the service account for the Vault CSI provider to the `privileged`
security context constraint.
```shell-session
$ oc adm policy add-scc-to-user privileged system:serviceaccount:${KUBERNETES_VAULT_NAMESPACE}:vault-csi-provider
```
You need to give additional access to the application retrieving secrets with the Vault CSI provider.
Create a `SecurityContextConstraints` object that enables `allowHostDirVolumePlugin`, `allowHostNetwork`, and
`allowHostPorts` for the application's service account.
You can adjust the other attributes based on your application's runtime configuration.
```shell-session
$ cat > application-scc.yaml << EOF
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
name: vault-csi-provider
allowPrivilegedContainer: false
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowHostPorts: true
allowHostIPC: false
allowHostPID: false
readOnlyRootFilesystem: false
defaultAddCapabilities:
- SYS_ADMIN
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
fsGroup:
type: RunAsAny
users:
- system:serviceaccount:${KUBERNETES_APPLICATION_NAMESPACE}:${APPLICATION_SERVICE_ACCOUNT}
EOF
```
Add the security context constraint for the application.
```shell-session
$ kubectl apply -f application-scc.yaml
```
---
layout: docs
page_title: Vault Secrets Operator Helm Chart Configuration
description: >-
Configuration for the Vault Secrets Operator Helm chart.
---
<!-- DO NOT EDIT.
Generated from chart/values.yaml in the vault-secrets-operator repo.
commit SHA=08a6e5071ffa4faa486bd4b2c53b27585da4680c
To update run 'make gen-helm-docs' from the vault-secrets-operator repo.
-->
# Vault Secrets Operator helm chart
The chart is customizable using
[Helm configuration values](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing).
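For example, values can be overridden at install time; the release name, namespace, and replica count below are illustrative:

```shell-session
$ helm install vault-secrets-operator hashicorp/vault-secrets-operator \
    --namespace vault-secrets-operator-system \
    --create-namespace \
    --set controller.replicas=2
```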
<!-- codegen: start -->
## Top-Level Stanzas
Use these links to navigate to a particular top-level stanza.
- [`controller`](#h-controller)
- [`metricsService`](#h-metricsservice)
- [`defaultVaultConnection`](#h-defaultvaultconnection)
- [`defaultAuthMethod`](#h-defaultauthmethod)
- [`telemetry`](#h-telemetry)
- [`hooks`](#h-hooks)
- [`tests`](#h-tests)
## All Values
### controller ((#h-controller))
- `controller` ((#v-controller)) - Top level configuration for the vault secrets operator deployment.
This consists of a controller and a kube rbac proxy container.
- `replicas` ((#v-controller-replicas)) (`integer: 1`) - Set the number of replicas for the operator.
- `strategy` ((#v-controller-strategy)) (`object: ""`) - Configure update strategy for multi-replica deployments.
Kubernetes supports the types Recreate and RollingUpdate.
ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
Example:
strategy:
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
  type: RollingUpdate
- `hostAliases` ((#v-controller-hostaliases)) (`array<map>`) - Host Aliases settings for vault-secrets-operator pod.
The value is an array of PodSpec HostAlias maps.
ref: https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
Example:
hostAliases:
  - ip: 192.168.1.100
    hostnames:
      - vault.example.com
- `nodeSelector` ((#v-controller-nodeselector)) (`map`) - nodeSelector labels for vault-secrets-operator pod assignment.
ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
Example:
nodeSelector:
  beta.kubernetes.io/arch: amd64
- `tolerations` ((#v-controller-tolerations)) (`array<map>`) - Toleration Settings for vault-secrets-operator pod.
The value is an array of PodSpec Toleration maps.
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
Example:
tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
- `affinity` ((#v-controller-affinity)) - Affinity settings for vault-secrets-operator pod.
The value is a map of PodSpec Affinity maps.
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
Example:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - antarctica-east1
                - antarctica-west1
- `rbac` ((#v-controller-rbac))
- `clusterRoleAggregation` ((#v-controller-rbac-clusterroleaggregation)) - clusterRoleAggregation defines the roles included in the aggregated ClusterRole.
- `viewerRoles` ((#v-controller-rbac-clusterroleaggregation-viewerroles)) (`array<string>: []`) - viewerRoles is a list of roles that will be aggregated into the viewer ClusterRole.
The role name must be that of any VSO resource type. E.g. "VaultAuth", "HCPAuth".
All values are case-insensitive.
Specifying '*' as the first element will include all roles in the aggregation.
The ClusterRole name takes the form of `<chart-fullname>-aggregate-role-viewer`.
Example usages:
all roles:
- '*'
individually specified roles:
- "VaultAuth"
- "HCPAuth"
- `editorRoles` ((#v-controller-rbac-clusterroleaggregation-editorroles)) (`array<string>: []`) - editorRoles is a list of roles that will be aggregated into the editor ClusterRole.
The role name must be that of any VSO resource type. E.g. "VaultAuth", "HCPAuth".
All values are case-insensitive.
Specifying '*' as the first element will include all roles in the aggregation.
The ClusterRole name takes the form of `<chart-fullname>-aggregate-role-editor`.
Example usages:
all roles:
- '*'
individually specified roles:
- "VaultAuth"
- "HCPAuth"
- `userFacingRoles` ((#v-controller-rbac-clusterroleaggregation-userfacingroles)) (`object: ""`) - userFacingRoles is a map of roles that will be aggregated into the viewer and editor ClusterRoles.
See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles for more information.
- `view` ((#v-controller-rbac-clusterroleaggregation-userfacingroles-view)) (`boolean: false`) - view controls whether the aggregated viewer ClusterRole will be made available to the user-facing
'view' ClusterRole. Requires the viewerRoles to be set.
- `edit` ((#v-controller-rbac-clusterroleaggregation-userfacingroles-edit)) (`boolean: false`) - edit controls whether the aggregated editor ClusterRole will be made available to the user-facing
'edit' ClusterRole. Requires the editorRoles to be set.
- `kubeRbacProxy` ((#v-controller-kuberbacproxy)) - Settings related to the kubeRbacProxy container. This container is an HTTP proxy for the
controller manager which performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
- `image` ((#v-controller-kuberbacproxy-image)) - Image sets the repo and tag of the kube-rbac-proxy image to use for the controller.
- `pullPolicy` ((#v-controller-kuberbacproxy-image-pullpolicy)) (`string: IfNotPresent`)
- `repository` ((#v-controller-kuberbacproxy-image-repository)) (`string: quay.io/brancz/kube-rbac-proxy`)
- `tag` ((#v-controller-kuberbacproxy-image-tag)) (`string: v0.18.1`)
- `resources` ((#v-controller-kuberbacproxy-resources)) (`map`) - Configures the default resources for the kube rbac proxy container.
For more information on configuring resources, see the K8s documentation:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- `limits` ((#v-controller-kuberbacproxy-resources-limits))
- `cpu` ((#v-controller-kuberbacproxy-resources-limits-cpu)) (`string: 500m`)
- `memory` ((#v-controller-kuberbacproxy-resources-limits-memory)) (`string: 128Mi`)
- `requests` ((#v-controller-kuberbacproxy-resources-requests))
- `cpu` ((#v-controller-kuberbacproxy-resources-requests-cpu)) (`string: 5m`)
- `memory` ((#v-controller-kuberbacproxy-resources-requests-memory)) (`string: 64Mi`)
- `imagePullSecrets` ((#v-controller-imagepullsecrets)) (`array<map>`) - Image pull secret to use for private container registry authentication which will be applied to the controller's
service account. Alternatively, the value may be specified as an array of strings.
Example:
```yaml
imagePullSecrets:
- name: pull-secret-name-1
- name: pull-secret-name-2
```
Refer to https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry.
- `extraLabels` ((#v-controller-extralabels)) - Extra labels to attach to the deployment. This should be formatted as a YAML object (map)
- `annotations` ((#v-controller-annotations)) - This value defines additional annotations for the deployment. This should be formatted as a YAML object (map)
- `manager` ((#v-controller-manager)) - Settings related to the vault-secrets-operator container.
- `image` ((#v-controller-manager-image)) - Image sets the repo and tag of the vault-secrets-operator image to use for the controller.
- `pullPolicy` ((#v-controller-manager-image-pullpolicy)) (`string: IfNotPresent`)
- `repository` ((#v-controller-manager-image-repository)) (`string: hashicorp/vault-secrets-operator`)
- `tag` ((#v-controller-manager-image-tag)) (`string: 0.9.0`)
- `logging` ((#v-controller-manager-logging)) - Logging configuration for the operator.
- `level` ((#v-controller-manager-logging-level)) (`string: info`) - Sets the log level for the operator.
Built-in levels are: info, error, debug, debug-extended, trace
Default: info
- `timeEncoding` ((#v-controller-manager-logging-timeencoding)) (`string: rfc3339`) - Sets the time encoding for the operator.
Options are: epoch, millis, nano, iso8601, rfc3339, rfc3339nano
Default: rfc3339
- `stacktraceLevel` ((#v-controller-manager-logging-stacktracelevel)) (`string: panic`) - Sets the stacktrace level for the operator.
Options are: info, error, panic
Default: panic
- `globalTransformationOptions` ((#v-controller-manager-globaltransformationoptions)) - Global secret transformation options. In addition to the boolean options
below, these options may be set via the
`VSO_GLOBAL_TRANSFORMATION_OPTIONS` environment variable as a
comma-separated list. Valid values are: `exclude-raw`
- `excludeRaw` ((#v-controller-manager-globaltransformationoptions-excluderaw)) (`boolean: false`) - excludeRaw directs the operator to prevent _raw secret data being stored
in the destination K8s Secret.
- `globalVaultAuthOptions` ((#v-controller-manager-globalvaultauthoptions)) - Global Vault auth options. In addition to the boolean options
below, these options may be set via the
`VSO_GLOBAL_VAULT_AUTH_OPTIONS` environment variable as a
comma-separated list. Valid values are: `allow-default-globals`
- `allowDefaultGlobals` ((#v-controller-manager-globalvaultauthoptions-allowdefaultglobals)) (`boolean: true`) - allowDefaultGlobals directs the operator to search for a "default"
VaultAuthGlobal if none is specified on the referring VaultAuth CR.
Default: true
- `backoffOnSecretSourceError` ((#v-controller-manager-backoffonsecretsourceerror)) (`object: ""`) - Backoff settings for the controller manager. These settings control the backoff behavior
when the controller encounters an error while fetching secrets from the SecretSource.
For example given the following settings:
initialInterval: 5s
maxInterval: 60s
randomizationFactor: 0.5
multiplier: 1.5
The backoff retry sequence might be something like:
5.5s, 7.5s, 11.25s, 16.87s, 25.3125s, 37.96s, 56.95s, 60.95s...
- `initialInterval` ((#v-controller-manager-backoffonsecretsourceerror-initialinterval)) (`duration: 5s`) - Initial interval between retries.
- `maxInterval` ((#v-controller-manager-backoffonsecretsourceerror-maxinterval)) (`duration: 60s`) - Maximum interval between retries.
- `maxElapsedTime` ((#v-controller-manager-backoffonsecretsourceerror-maxelapsedtime)) (`duration: 0s`) - Maximum elapsed time without a successful sync from the secret's source.
It's important to note that setting this option to anything other than
its default will result in the secret sync no longer being retried after
reaching the max elapsed time.
- `randomizationFactor` ((#v-controller-manager-backoffonsecretsourceerror-randomizationfactor)) (`float: 0.5`) - Randomization factor randomizes the backoff interval between retries.
This helps to spread out the retries to avoid a thundering herd.
If the value is 0, then the backoff interval will not be randomized.
It is recommended to set this to a value that is greater than 0.
- `multiplier` ((#v-controller-manager-backoffonsecretsourceerror-multiplier)) (`float: 1.5`) - Sets the multiplier that is used to increase the backoff interval between retries.
The value must always be greater than zero.
- `clientCache` ((#v-controller-manager-clientcache)) - Configures the client cache which is used by the controller to cache (and potentially persist) vault tokens that
are the result of using the VaultAuthMethod. This enables re-use of Vault Tokens
throughout their TTLs as well as the ability to renew.
Persistence is only useful in the context of Dynamic Secrets, so "none" is an okay default.
- `persistenceModel` ((#v-controller-manager-clientcache-persistencemodel)) (`string: ""`) - Defines the `-client-cache-persistence-model` which caches and persists vault tokens.
May also be set via the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable.
Valid values are:
"none" - in-memory client cache is used, no tokens are persisted.
"direct-unencrypted" - in-memory client cache is persisted, unencrypted. This is NOT recommended for any production workload.
"direct-encrypted" - in-memory client cache is persisted encrypted using the Vault Transit engine.
Note: It is strongly encouraged to not use the setting of "direct-unencrypted" in
production due to the potential of vault tokens being leaked as they would then be stored
in clear text.
default: "none"
- `cacheSize` ((#v-controller-manager-clientcache-cachesize)) (`integer: ""`) - Defines the size of the in-memory LRU cache *in entries*, that is used by the client cache controller.
May also be set via the `VSO_CLIENT_CACHE_SIZE` environment variable.
Larger numbers will increase memory usage by the controller, lower numbers will cause more frequent evictions
of the client cache which can result in additional Vault client counts.
default: 10000
- `storageEncryption` ((#v-controller-manager-clientcache-storageencryption)) - StorageEncryption provides the necessary configuration to encrypt the client storage
cache within Kubernetes objects using (required) Vault Transit Engine.
This should only be configured when client cache persistence with encryption is enabled and
will deploy an additional VaultAuthMethod to be used by the Vault Transit Engine.
E.g. when `controller.manager.clientCache.persistenceModel=direct-encrypted`
Supported Vault authentication methods for the Transit Auth method are: jwt, appRole,
aws, and kubernetes.
Typically, there should only ever be one VaultAuth configured with
StorageEncryption in the Cluster.
- `enabled` ((#v-controller-manager-clientcache-storageencryption-enabled)) (`boolean: false`) - toggles the deployment of the Transit VaultAuthMethod CR.
- `vaultConnectionRef` ((#v-controller-manager-clientcache-storageencryption-vaultconnectionref)) (`string: default`) - Vault Connection Ref to be used by the Transit VaultAuthMethod.
Default setting will use the default VaultConnectionRef, which must also be configured.
- `keyName` ((#v-controller-manager-clientcache-storageencryption-keyname)) (`string: ""`) - KeyName to use for encrypt/decrypt operations via Vault Transit.
- `transitMount` ((#v-controller-manager-clientcache-storageencryption-transitmount)) (`string: ""`) - Mount path for the Transit VaultAuthMethod.
- `namespace` ((#v-controller-manager-clientcache-storageencryption-namespace)) (`string: ""`) - Vault namespace for the Transit VaultAuthMethod CR.
- `method` ((#v-controller-manager-clientcache-storageencryption-method)) (`string: kubernetes`) - Vault Auth method to be used with the Transit VaultAuthMethod CR.
- `mount` ((#v-controller-manager-clientcache-storageencryption-mount)) (`string: kubernetes`) - Mount path for the Transit VaultAuthMethod.
- `kubernetes` ((#v-controller-manager-clientcache-storageencryption-kubernetes)) - Vault Kubernetes auth method specific configuration
- `role` ((#v-controller-manager-clientcache-storageencryption-kubernetes-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if `controller.manager.clientCache.storageEncryption.enabled=true`
- `serviceAccount` ((#v-controller-manager-clientcache-storageencryption-kubernetes-serviceaccount)) (`string: ""`) - Kubernetes ServiceAccount associated with the Transit Vault Auth Role
Defaults to using the Operator's service-account.
- `tokenAudiences` ((#v-controller-manager-clientcache-storageencryption-kubernetes-tokenaudiences)) (`array<string>: []`) - Token Audience should match the audience of the vault kubernetes auth role.
- `jwt` ((#v-controller-manager-clientcache-storageencryption-jwt)) - Vault JWT auth method specific configuration
- `role` ((#v-controller-manager-clientcache-storageencryption-jwt-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if using JWT for the Transit VaultAuthMethod.
- `secretRef` ((#v-controller-manager-clientcache-storageencryption-jwt-secretref)) (`string: ""`) - One of the following is required prior to deploying the helm chart
- K8s secret that contains the JWT
- K8s service account if a service account JWT is used as a Vault JWT auth token and
needs generating by VSO.
Name of Kubernetes Secret that has the Vault JWT auth token.
The Kubernetes Secret must contain a key named `jwt` which references the JWT token, and
must exist in the namespace of any consuming VaultSecret CR. This is a required field if
a JWT token is provided.
- `serviceAccount` ((#v-controller-manager-clientcache-storageencryption-jwt-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount to generate a service account JWT
- `tokenAudiences` ((#v-controller-manager-clientcache-storageencryption-jwt-tokenaudiences)) (`array<string>: []`) - Token Audience should match the bound_audiences or the `aud` list in bound_claims if
applicable of the Vault JWT auth role.
- `appRole` ((#v-controller-manager-clientcache-storageencryption-approle)) - AppRole auth method specific configuration
- `roleId` ((#v-controller-manager-clientcache-storageencryption-approle-roleid)) (`string: ""`) - AppRole Role's RoleID to use for authenticating to Vault.
This is a required field when using appRole and must be set up in Vault prior to deploying
the helm chart.
- `secretRef` ((#v-controller-manager-clientcache-storageencryption-approle-secretref)) (`string: ""`) - Name of Kubernetes Secret that has the AppRole Role's SecretID used to authenticate with
Vault. The Kubernetes Secret must contain a key named `id` which references the AppRole
Role's SecretID, and must exist in the namespace of any consuming VaultSecret CR.
This is a required field when using appRole and must be set up in Vault prior to
deploying the helm chart.
- `aws` ((#v-controller-manager-clientcache-storageencryption-aws)) - AWS auth method specific configuration
- `role` ((#v-controller-manager-clientcache-storageencryption-aws-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if using AWS for the Transit auth method.
- `region` ((#v-controller-manager-clientcache-storageencryption-aws-region)) (`string: ""`) - AWS region to use for signing the authentication request
Optional, but most commonly will be the EKS cluster region.
- `headerValue` ((#v-controller-manager-clientcache-storageencryption-aws-headervalue)) (`string: ""`) - Vault header value to include in the STS signing request
- `sessionName` ((#v-controller-manager-clientcache-storageencryption-aws-sessionname)) (`string: ""`) - The role session name to use when creating a WebIdentity provider
- `stsEndpoint` ((#v-controller-manager-clientcache-storageencryption-aws-stsendpoint)) (`string: ""`) - The STS endpoint to use; if not set will use the default
- `iamEndpoint` ((#v-controller-manager-clientcache-storageencryption-aws-iamendpoint)) (`string: ""`) - The IAM endpoint to use; if not set will use the default
- `secretRef` ((#v-controller-manager-clientcache-storageencryption-aws-secretref)) (`string: ""`) - The name of a Kubernetes Secret which holds credentials for AWS. Supported keys
include `access_key_id`, `secret_access_key`, `session_token`
- `irsaServiceAccount` ((#v-controller-manager-clientcache-storageencryption-aws-irsaserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured with IAM Roles
for Service Accounts (IRSA). Should be annotated with "eks.amazonaws.com/role-arn".
- `gcp` ((#v-controller-manager-clientcache-storageencryption-gcp))
- `role` ((#v-controller-manager-clientcache-storageencryption-gcp-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if using GCP for the Transit auth method.
- `workloadIdentityServiceAccount` ((#v-controller-manager-clientcache-storageencryption-gcp-workloadidentityserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured for workload
identity in GKE.
- `region` ((#v-controller-manager-clientcache-storageencryption-gcp-region)) (`string: ""`) - GCP Region of the GKE cluster's identity provider. Defaults to the
region returned from the operator pod's local metadata server if
unspecified.
- `clusterName` ((#v-controller-manager-clientcache-storageencryption-gcp-clustername)) (`string: ""`) - GKE cluster name. Defaults to the cluster-name returned from the
operator pod's local metadata server if unspecified.
- `projectID` ((#v-controller-manager-clientcache-storageencryption-gcp-projectid)) (`string: ""`) - GCP project id. Defaults to the project-id returned from the
operator pod's local metadata server if unspecified.
- `params` ((#v-controller-manager-clientcache-storageencryption-params)) (`map`) - Params to use when authenticating to Vault using this auth method.
params:
param-something1: "foo"
- `headers` ((#v-controller-manager-clientcache-storageencryption-headers)) (`map: ""`) - Headers to be included in all Vault requests.
headers:
X-vault-something1: "foo"
- `maxConcurrentReconciles` ((#v-controller-manager-maxconcurrentreconciles)) (`integer: ""`) - Defines the maximum number of concurrent reconciles for each controller.
May also be set via the `VSO_MAX_CONCURRENT_RECONCILES` environment variable.
default: 100
- `extraEnv` ((#v-controller-manager-extraenv)) (`array<map>`) - Defines additional environment variables to be added to the
vault-secrets-operator manager container.
Example:
```yaml
extraEnv:
- name: HTTP_PROXY
value: http://proxy.example.com
- name: VSO_OUTPUT_FORMAT
value: json
- name: VSO_CLIENT_CACHE_SIZE
value: "20000"
- name: VSO_CLIENT_CACHE_PERSISTENCE_MODEL
value: "direct-encrypted"
- name: VSO_MAX_CONCURRENT_RECONCILES
value: "30"
```
- `extraArgs` ((#v-controller-manager-extraargs)) (`array: []`) - Defines additional commandline arguments to be passed to the
vault-secrets-operator manager container.
- `resources` ((#v-controller-manager-resources)) (`map`) - Configures the default resources for the vault-secrets-operator container.
For more information on configuring resources, see the K8s documentation:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- `limits` ((#v-controller-manager-resources-limits))
- `cpu` ((#v-controller-manager-resources-limits-cpu)) (`string: 500m`)
- `memory` ((#v-controller-manager-resources-limits-memory)) (`string: 128Mi`)
- `requests` ((#v-controller-manager-resources-requests))
- `cpu` ((#v-controller-manager-resources-requests-cpu)) (`string: 10m`)
- `memory` ((#v-controller-manager-resources-requests-memory)) (`string: 64Mi`)
- `podSecurityContext` ((#v-controller-podsecuritycontext)) - Configures the Pod Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context
- `runAsNonRoot` ((#v-controller-podsecuritycontext-runasnonroot)) (`boolean: true`)
- `securityContext` ((#v-controller-securitycontext)) - Configures the Container Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context
- `allowPrivilegeEscalation` ((#v-controller-securitycontext-allowprivilegeescalation)) (`boolean: false`)
- `controllerConfigMapYaml` ((#v-controller-controllerconfigmapyaml)) (`map`) - Sets the configuration settings used by the controller. Any custom changes will be reflected in the
data field of the configmap.
For more information on configuring resources, see the K8s documentation:
https://kubernetes.io/docs/concepts/configuration/configmap/
- `health` ((#v-controller-controllerconfigmapyaml-health))
- `healthProbeBindAddress` ((#v-controller-controllerconfigmapyaml-health-healthprobebindaddress)) (`string: :8081`)
- `leaderElection` ((#v-controller-controllerconfigmapyaml-leaderelection))
- `leaderElect` ((#v-controller-controllerconfigmapyaml-leaderelection-leaderelect)) (`boolean: true`)
- `resourceName` ((#v-controller-controllerconfigmapyaml-leaderelection-resourcename)) (`string: b0d477c0.hashicorp.com`)
- `metrics` ((#v-controller-controllerconfigmapyaml-metrics))
- `bindAddress` ((#v-controller-controllerconfigmapyaml-metrics-bindaddress)) (`string: 127.0.0.1:8080`)
- `webhook` ((#v-controller-controllerconfigmapyaml-webhook))
- `port` ((#v-controller-controllerconfigmapyaml-webhook-port)) (`integer: 9443`)
- `kubernetesClusterDomain` ((#v-controller-kubernetesclusterdomain)) (`string: cluster.local`) - Configures the environment variable KUBERNETES_CLUSTER_DOMAIN used by KubeDNS.
- `terminationGracePeriodSeconds` ((#v-controller-terminationgraceperiodseconds)) (`integer: 120`) - Duration in seconds the pod needs to terminate gracefully.
See: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
- `preDeleteHookTimeoutSeconds` ((#v-controller-predeletehooktimeoutseconds)) (`integer: 120`) - Timeout in seconds for the pre-delete hook
### metricsService ((#h-metricsservice))
- `metricsService` ((#v-metricsservice)) (`map`) - Configure the ports used by the operator's metrics service.
- `ports` ((#v-metricsservice-ports)) (`map`) - Set the port settings for the metrics service.
For more information on configuring resources, see the K8s documentation:
https://kubernetes.io/docs/concepts/services-networking/service/
- `name` ((#v-metricsservice-ports-name)) (`string: https`)
- `port` ((#v-metricsservice-ports-port)) (`integer: 8443`)
- `protocol` ((#v-metricsservice-ports-protocol)) (`string: TCP`)
- `targetPort` ((#v-metricsservice-ports-targetport)) (`string: https`)
- `type` ((#v-metricsservice-type)) (`string: ClusterIP`)
### defaultVaultConnection ((#h-defaultvaultconnection))
- `defaultVaultConnection` ((#v-defaultvaultconnection)) - Configures the default VaultConnection CR which will be used by resources
if they do not specify a VaultConnection reference. The name is 'default' and will
always be installed in the same namespace as the operator.
NOTE:
* It is strongly recommended to deploy the vault secrets operator in a secure Vault environment
which includes a configuration utilizing TLS and installing Vault into its own restricted namespace.
- `enabled` ((#v-defaultvaultconnection-enabled)) (`boolean: false`) - toggles the deployment of the VaultConnection CR
- `address` ((#v-defaultvaultconnection-address)) (`string: ""`) - Address of the Vault Server
Example: http://vault.default.svc.cluster.local:8200
- `caCertSecret` ((#v-defaultvaultconnection-cacertsecret)) (`string: ""`) - CACertSecret is the name of a Kubernetes secret containing the trusted PEM encoded CA certificate chain as `ca.crt`.
Note: This secret must exist prior to deploying the CR.
- `tlsServerName` ((#v-defaultvaultconnection-tlsservername)) (`string: ""`) - TLSServerName to use as the SNI host for TLS connections.
- `skipTLSVerify` ((#v-defaultvaultconnection-skiptlsverify)) (`boolean: false`) - SkipTLSVerify for TLS connections.
- `headers` ((#v-defaultvaultconnection-headers)) (`map`) - Headers to be included in all Vault requests.
headers:
X-vault-something: "foo"
### defaultAuthMethod ((#h-defaultauthmethod))
- `defaultAuthMethod` ((#v-defaultauthmethod)) - Configures and deploys the default VaultAuthMethod CR which will be used by resources
if they do not specify a VaultAuthMethod reference. The name is 'default' and will
always be installed in the same namespace as the operator.
NOTE:
* It is strongly recommended to deploy the vault secrets operator in a secure Vault environment
which includes a configuration utilizing TLS and installing Vault into its own restricted namespace.
- `enabled` ((#v-defaultauthmethod-enabled)) (`boolean: false`) - toggles the deployment of the VaultAuthMethod CR
- `namespace` ((#v-defaultauthmethod-namespace)) (`string: ""`) - Vault namespace for the VaultAuthMethod CR
- `allowedNamespaces` ((#v-defaultauthmethod-allowednamespaces)) (`array<string>: []`) - Kubernetes namespace glob patterns which are allow-listed for use with the default AuthMethod.
- `method` ((#v-defaultauthmethod-method)) (`string: kubernetes`) - Vault Auth method to be used with the VaultAuthMethod CR
- `mount` ((#v-defaultauthmethod-mount)) (`string: kubernetes`) - Mount path for the Vault Auth Method.
- `kubernetes` ((#v-defaultauthmethod-kubernetes)) - Vault Kubernetes auth method specific configuration
- `role` ((#v-defaultauthmethod-kubernetes-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if `defaultAuthMethod.enabled=true`
- `serviceAccount` ((#v-defaultauthmethod-kubernetes-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount associated with the default Vault Auth Role
- `tokenAudiences` ((#v-defaultauthmethod-kubernetes-tokenaudiences)) (`array<string>: []`) - Token Audience should match the audience of the vault kubernetes auth role.
- `jwt` ((#v-defaultauthmethod-jwt)) - Vault JWT auth method specific configuration
- `role` ((#v-defaultauthmethod-jwt-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if using JWT for the default auth method.
- `secretRef` ((#v-defaultauthmethod-jwt-secretref)) (`string: ""`) - One of the following is required prior to deploying the helm chart
- K8s secret that contains the JWT
- K8s service account if a service account JWT is used as a Vault JWT auth token and needs generating by VSO
Name of Kubernetes Secret that has the Vault JWT auth token.
The Kubernetes Secret must contain a key named `jwt` which references the JWT token, and must exist in the namespace
of any consuming VaultSecret CR. This is a required field if a JWT token is provided.
- `serviceAccount` ((#v-defaultauthmethod-jwt-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount to generate a service account JWT
- `tokenAudiences` ((#v-defaultauthmethod-jwt-tokenaudiences)) (`array<string>: []`) - Token Audience should match the bound_audiences or the `aud` list in bound_claims if applicable
of the Vault JWT auth role.
- `appRole` ((#v-defaultauthmethod-approle)) - AppRole auth method specific configuration
- `roleId` ((#v-defaultauthmethod-approle-roleid)) (`string: ""`) - AppRole Role's RoleID to use for authenticating to Vault.
This is a required field when using appRole and must be set up in Vault prior to deploying the
helm chart.
- `secretRef` ((#v-defaultauthmethod-approle-secretref)) (`string: ""`) - Name of Kubernetes Secret that has the AppRole Role's SecretID used to authenticate with Vault.
The Kubernetes Secret must contain a key named `id` which references the AppRole Role's
SecretID, and must exist in the namespace of any consuming VaultSecret CR.
This is a required field when using appRole and must be set up in Vault prior to deploying the
helm chart.
- `aws` ((#v-defaultauthmethod-aws)) - AWS auth method specific configuration
- `role` ((#v-defaultauthmethod-aws-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if using AWS for the default auth method.
- `region` ((#v-defaultauthmethod-aws-region)) (`string: ""`) - AWS region to use for signing the authentication request
Optional, but most commonly will be the region where the EKS cluster is running
- `headerValue` ((#v-defaultauthmethod-aws-headervalue)) (`string: ""`) - Vault header value to include in the STS signing request
- `sessionName` ((#v-defaultauthmethod-aws-sessionname)) (`string: ""`) - The role session name to use when creating a WebIdentity provider
- `stsEndpoint` ((#v-defaultauthmethod-aws-stsendpoint)) (`string: ""`) - The STS endpoint to use; if not set will use the default
- `iamEndpoint` ((#v-defaultauthmethod-aws-iamendpoint)) (`string: ""`) - The IAM endpoint to use; if not set will use the default
- `secretRef` ((#v-defaultauthmethod-aws-secretref)) (`string: ""`) - The name of a Kubernetes Secret which holds credentials for AWS. Supported keys include
`access_key_id`, `secret_access_key`, `session_token`
- `irsaServiceAccount` ((#v-defaultauthmethod-aws-irsaserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured with IAM Roles
for Service Accounts (IRSA). Should be annotated with "eks.amazonaws.com/role-arn".
- `gcp` ((#v-defaultauthmethod-gcp))
- `role` ((#v-defaultauthmethod-gcp-role)) (`string: ""`) - Vault Auth Role to use
This is a required field and must be set up in Vault prior to deploying the helm chart
if using GCP for the default auth method.
- `workloadIdentityServiceAccount` ((#v-defaultauthmethod-gcp-workloadidentityserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured for workload
identity in GKE.
- `region` ((#v-defaultauthmethod-gcp-region)) (`string: ""`) - GCP Region of the GKE cluster's identity provider. Defaults to the
region returned from the operator pod's local metadata server if
unspecified.
- `clusterName` ((#v-defaultauthmethod-gcp-clustername)) (`string: ""`) - GKE cluster name. Defaults to the cluster-name returned from the
operator pod's local metadata server if unspecified.
- `projectID` ((#v-defaultauthmethod-gcp-projectid)) (`string: ""`) - GCP project id. Defaults to the project-id returned from the
operator pod's local metadata server if unspecified.
- `params` ((#v-defaultauthmethod-params)) (`map`) - Params to use when authenticating to Vault
params:
param-something1: "foo"
- `headers` ((#v-defaultauthmethod-headers)) (`map`) - Headers to be included in all Vault requests.
headers:
X-vault-something1: "foo"
- `vaultAuthGlobalRef` ((#v-defaultauthmethod-vaultauthglobalref)) - VaultAuthGlobal reference configuration for the default VaultAuth CR.
- `enabled` ((#v-defaultauthmethod-vaultauthglobalref-enabled)) (`boolean: false`) - toggles the inclusion of the VaultAuthGlobal configuration in the
default VaultAuth CR.
- `name` ((#v-defaultauthmethod-vaultauthglobalref-name)) (`string: ""`) - Name of the VaultAuthGlobal CR to reference.
- `namespace` ((#v-defaultauthmethod-vaultauthglobalref-namespace)) (`string: ""`) - Namespace of the VaultAuthGlobal CR to reference.
- `allowDefault` ((#v-defaultauthmethod-vaultauthglobalref-allowdefault)) (`boolean: ""`) - Toggles whether the default VaultAuthGlobal may be used when `name` is not set.
- `mergeStrategy` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy))
- `headers` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy-headers)) (`string: none`) - merge strategy for headers
Valid values are: "replace", "merge", "none"
Default: "replace"
- `params` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy-params)) (`string: none`) - merge strategy for params
Valid values are: "replace", "merge", "none"
Default: "replace"
### telemetry ((#h-telemetry))
- `telemetry` ((#v-telemetry)) - Configures a Prometheus ServiceMonitor
- `serviceMonitor` ((#v-telemetry-servicemonitor))
- `enabled` ((#v-telemetry-servicemonitor-enabled)) (`boolean: false`) - Enable deployment of the Vault Secrets Operator ServiceMonitor CustomResource.
The Prometheus operator *must* be installed before enabling this feature;
otherwise the chart will fail to install due to missing CustomResourceDefinitions
provided by the operator.
Instructions on how to install the Prometheus operator Helm chart can be found here:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
More information can be found here:
https://github.com/prometheus-operator/prometheus-operator
https://github.com/prometheus-operator/kube-prometheus
- `selectors` ((#v-telemetry-servicemonitor-selectors)) (`string: ""`) - Selector labels to add to the ServiceMonitor.
When empty, defaults to:
release: prometheus
- `scheme` ((#v-telemetry-servicemonitor-scheme)) (`string: https`) - Scheme of the service Prometheus scrapes metrics from. This must match the scheme of the metrics service of VSO
- `port` ((#v-telemetry-servicemonitor-port)) (`string: https`) - Port at which Prometheus scrapes metrics. This must match the port of the metrics service of VSO
- `path` ((#v-telemetry-servicemonitor-path)) (`string: /metrics`) - Path at which Prometheus scrapes metrics
- `bearerTokenFile` ((#v-telemetry-servicemonitor-bearertokenfile)) (`string: /var/run/secrets/kubernetes.io/serviceaccount/token`) - File Prometheus reads bearer token from for scraping metrics
- `interval` ((#v-telemetry-servicemonitor-interval)) (`string: 30s`) - Interval at which Prometheus scrapes metrics
- `scrapeTimeout` ((#v-telemetry-servicemonitor-scrapetimeout)) (`string: 10s`) - Timeout for Prometheus scrapes
### hooks ((#h-hooks))
- `hooks` ((#v-hooks)) - Configure the behaviour of Helm hooks.
- `resources` ((#v-hooks-resources)) - Resources common to all hooks.
- `limits` ((#v-hooks-resources-limits))
- `cpu` ((#v-hooks-resources-limits-cpu)) (`string: 500m`)
- `memory` ((#v-hooks-resources-limits-memory)) (`string: 128Mi`)
- `requests` ((#v-hooks-resources-requests))
- `cpu` ((#v-hooks-resources-requests-cpu)) (`string: 10m`)
- `memory` ((#v-hooks-resources-requests-memory)) (`string: 64Mi`)
- `upgradeCRDs` ((#v-hooks-upgradecrds)) - Configure the Helm pre-upgrade hook that handles custom resource definition (CRD) upgrades.
- `enabled` ((#v-hooks-upgradecrds-enabled)) (`boolean: true`) - Set to true to automatically upgrade the CRDs.
Disabling this will require manual intervention to upgrade the CRDs, so it is recommended to
always leave it enabled.
- `backoffLimit` ((#v-hooks-upgradecrds-backofflimit)) (`integer: 5`) - Limit the number of retries for the CRD upgrade.
- `executionTimeout` ((#v-hooks-upgradecrds-executiontimeout)) (`string: 30s`) - Set the timeout for the CRD upgrade. The operation should typically take less than 5s
to complete.
### tests ((#h-tests))
- `tests` ((#v-tests)) - Used by unit tests. Not rendered except when using `helm template`; this can be safely ignored.
- `enabled` ((#v-tests-enabled)) (`boolean: true`)
<!-- codegen: end -->
## Helm chart examples
The `config.yaml` below results in a single-replica installation of the Vault Secrets Operator
with a default Vault connection and auth method custom resource deployed.
It expects a local Vault installation within the Kubernetes cluster,
accessible via `http://vault.default.svc.cluster.local:8200` with TLS disabled,
and a [Vault Auth Method](/vault/docs/auth/kubernetes) to be set up against the `default` ServiceAccount.
```yaml
# config.yaml
defaultVaultConnection:
enabled: true
defaultAuthMethod:
enabled: true
```
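To install the operator with this configuration (a sketch assuming the chart is fetched from the HashiCorp Helm repository):

```shell-session
$ helm install vault-secrets-operator hashicorp/vault-secrets-operator \
    --namespace vault-secrets-operator-system \
    --create-namespace \
    --values config.yaml
```

A slightly fuller `config.yaml` might also pin the Kubernetes auth role and enable the Prometheus ServiceMonitor. The role name `vso-role` is a hypothetical placeholder; it must already exist in Vault, and the Prometheus operator must already be installed:

```yaml
# config.yaml
defaultVaultConnection:
  enabled: true
  address: http://vault.default.svc.cluster.local:8200
defaultAuthMethod:
  enabled: true
  kubernetes:
    # Hypothetical Vault Kubernetes auth role; must exist in Vault already.
    role: vso-role
telemetry:
  serviceMonitor:
    enabled: true
```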
## Customizing the helm chart
If you need to extend the Helm chart with additional options, we recommend using a third-party tool
such as [kustomize](https://github.com/kubernetes-sigs/kustomize) with the `config/` path
of the [vault-secrets-operator](https://github.com/hashicorp/vault-secrets-operator) project repository.
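A minimal sketch of that approach, assuming the repository's kubebuilder-style `config/default` overlay and illustrative resource names (verify both against the ref you target):

```yaml
# kustomization.yaml
resources:
  # Pull the operator manifests directly from the project repo.
  - https://github.com/hashicorp/vault-secrets-operator/config/default?ref=main

patches:
  # Hypothetical example patch: run two replicas of the controller manager.
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 2
    target:
      kind: Deployment
      name: controller-manager
```

Build and apply with `kubectl apply -k .`, or `kustomize build . | kubectl apply -f -`.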
must be that of any VSO resource type E g VaultAuth HCPAuth All values are case insensitive Specifying as the first element will include all roles in the aggregation The ClusterRole name takes the form of chart fullname aggregate role editor Example usages all roles individually specified roles VaultAuth HCPAuth userFacingRoles v controller rbac clusterroleaggregation userfacingroles object userFacingRoles is a map of roles that will be aggregated into the viewer and editor ClusterRoles See https kubernetes io docs reference access authn authz rbac user facing roles for more information view v controller rbac clusterroleaggregation userfacingroles view boolean false view controls whether the aggregated viewer ClusterRole will be made available to the user facing view ClusterRole Requires the viewerRoles to be set edit v controller rbac clusterroleaggregation userfacingroles edit boolean false view controls whether the aggregated editor ClusterRole will be made available to the user facing edit ClusterRole Requires the editorRoles to be set kubeRbacProxy v controller kuberbacproxy Settings related to the kubeRbacProxy container This container is an HTTP proxy for the controller manager which performs RBAC authorization against the Kubernetes API using SubjectAccessReviews image v controller kuberbacproxy image Image sets the repo and tag of the kube rbac proxy image to use for the controller pullPolicy v controller kuberbacproxy image pullpolicy string IfNotPresent repository v controller kuberbacproxy image repository string quay io brancz kube rbac proxy tag v controller kuberbacproxy image tag string v0 18 1 resources v controller kuberbacproxy resources map Configures the default resources for the kube rbac proxy container For more information on configuring resources see the K8s documentation https kubernetes io docs concepts configuration manage resources containers limits v controller kuberbacproxy resources limits cpu v controller kuberbacproxy resources limits cpu string 500m memory v controller kuberbacproxy resources limits memory string 128Mi requests v controller kuberbacproxy resources requests cpu v controller kuberbacproxy resources requests cpu string 5m memory v controller kuberbacproxy resources requests memory string 64Mi imagePullSecrets v controller imagepullsecrets array map Image pull secret to use for private container registry authentication which will be applied to the controllers service account Alternatively the value may be specified as an array of strings Example yaml imagePullSecrets name pull secret name 1 name pull secret name 2 Refer to https kubernetes io docs concepts containers images using a private registry extraLabels v controller extralabels Extra labels to attach to the deployment This should be formatted as a YAML object map annotations v controller annotations This value defines additional annotations for the deployment This should be formatted as a YAML object map manager v controller manager Settings related to the vault secrets operator container image v controller manager image Image sets the repo and tag of the vault secrets operator image to use for the controller pullPolicy v controller manager image pullpolicy string IfNotPresent repository v controller manager image repository string hashicorp vault secrets operator tag v controller manager image tag string 0 9 0 logging v controller manager logging logging level v controller manager logging level string info Sets the log level for the operator Builtin levels are info error debug debug 
extended trace Default info timeEncoding v controller manager logging timeencoding string rfc3339 Sets the time encoding for the operator Options are epoch millis nano iso8601 rfc3339 rfc3339nano Default rfc3339 stacktraceLevel v controller manager logging stacktracelevel string panic Sets the stacktrace level for the operator Options are info error panic Default panic globalTransformationOptions v controller manager globaltransformationoptions Global secret transformation options In addition to the boolean options below these options may be set via the VSO GLOBAL TRANSFORMATION OPTIONS environment variable as a comma separated list Valid values are exclude raw excludeRaw v controller manager globaltransformationoptions excluderaw boolean false excludeRaw directs the operator to prevent raw secret data being stored in the destination K8s Secret globalVaultAuthOptions v controller manager globalvaultauthoptions Global Vault auth options In addition to the boolean options below these options may be set via the VSO GLOBAL VAULT OPTION OPTIONS environment variable as a comma separated list Valid values are allow default globals allowDefaultGlobals v controller manager globalvaultauthoptions allowdefaultglobals boolean true allowDefaultGlobals directs the operator search for a default VaultAuthGlobal if none is specified on the referring VaultAuth CR Default true backoffOnSecretSourceError v controller manager backoffonsecretsourceerror object Backoff settings for the controller manager These settings control the backoff behavior when the controller encounters an error while fetching secrets from the SecretSource For example given the following settings initialInterval 5s maxInterval 60s randomizationFactor 0 5 multiplier 1 5 The backoff retry sequence might be something like 5 5s 7 5s 11 25s 16 87s 25 3125s 37 96s 56 95 60 95s initialInterval v controller manager backoffonsecretsourceerror initialinterval duration 5s Initial interval between retries maxInterval v controller manager backoffonsecretsourceerror maxinterval duration 60s Maximum interval between retries maxElapsedTime v controller manager backoffonsecretsourceerror maxelapsedtime duration 0s Maximum elapsed time without a successful sync from the secret s source It s important to note that setting this option to anything other than its default will result in the secret sync no longer being retried after reaching the max elapsed time randomizationFactor v controller manager backoffonsecretsourceerror randomizationfactor float 0 5 Randomization factor randomizes the backoff interval between retries This helps to spread out the retries to avoid a thundering herd If the value is 0 then the backoff interval will not be randomized It is recommended to set this to a value that is greater than 0 multiplier v controller manager backoffonsecretsourceerror multiplier float 1 5 Sets the multiplier that is used to increase the backoff interval between retries This value should always be set to a value greater than 0 The value must be greater than zero clientCache v controller manager clientcache Configures the client cache which is used by the controller to cache and potentially persist vault tokens that are the result of using the VaultAuthMethod This enables re use of Vault Tokens throughout their TTLs as well as the ability to renew Persistence is only useful in the context of Dynamic Secrets so none is an okay default persistenceModel v controller manager clientcache persistencemodel string Defines the client cache persistence model which 
caches persists vault tokens May also be set via the VSO CLIENT CACHE PERSISTENCE MODEL environment variable Valid values are none in memory client cache is used no tokens are persisted direct unencrypted in memory client cache is persisted unencrypted This is NOT recommended for any production workload direct encrypted in memory client cache is persisted encrypted using the Vault Transit engine Note It is strongly encouraged to not use the setting of direct unencrypted in production due to the potential of vault tokens being leaked as they would then be stored in clear text default none cacheSize v controller manager clientcache cachesize integer Defines the size of the in memory LRU cache in entries that is used by the client cache controller May also be set via the VSO CLIENT CACHE SIZE environment variable Larger numbers will increase memory usage by the controller lower numbers will cause more frequent evictions of the client cache which can result in additional Vault client counts default 10000 storageEncryption v controller manager clientcache storageencryption StorageEncryption provides the necessary configuration to encrypt the client storage cache within Kubernetes objects using required Vault Transit Engine This should only be configured when client cache persistence with encryption is enabled and will deploy an additional VaultAuthMethod to be used by the Vault Transit Engine E g when controller manager clientCache persistenceModel direct encrypted Supported Vault authentication methods for the Transit Auth method are jwt appRole aws and kubernetes Typically there should only ever be one VaultAuth configured with StorageEncryption in the Cluster enabled v controller manager clientcache storageencryption enabled boolean false toggles the deployment of the Transit VaultAuthMethod CR vaultConnectionRef v controller manager clientcache storageencryption vaultconnectionref string default Vault Connection Ref to be used by the Transit VaultAuthMethod Default setting will use the default VaultConnectionRef which must also be configured keyName v controller manager clientcache storageencryption keyname string KeyName to use for encrypt decrypt operations via Vault Transit transitMount v controller manager clientcache storageencryption transitmount string Mount path for the Transit VaultAuthMethod namespace v controller manager clientcache storageencryption namespace string Vault namespace for the Transit VaultAuthMethod CR method v controller manager clientcache storageencryption method string kubernetes Vault Auth method to be used with the Transit VaultAuthMethod CR mount v controller manager clientcache storageencryption mount string kubernetes Mount path for the Transit VaultAuthMethod kubernetes v controller manager clientcache storageencryption kubernetes Vault Kubernetes auth method specific configuration role v controller manager clientcache storageencryption kubernetes role string Vault Auth Role to use This is a required field and must be setup in Vault prior to deploying the helm chart if defaultAuthMethod enabled true serviceAccount v controller manager clientcache storageencryption kubernetes serviceaccount string Kubernetes ServiceAccount associated with the Transit Vault Auth Role Defaults to using the Operator s service account tokenAudiences v controller manager clientcache storageencryption kubernetes tokenaudiences array string Token Audience should match the audience of the vault kubernetes auth role jwt v controller manager clientcache storageencryption jwt Vault 
JWT auth method specific configuration role v controller manager clientcache storageencryption jwt role string Vault Auth Role to use This is a required field and must be setup in Vault prior to deploying the helm chart if using JWT for the Transit VaultAuthMethod secretRef v controller manager clientcache storageencryption jwt secretref string One of the following is required prior to deploying the helm chart K8s secret that contains the JWT K8s service account if a service account JWT is used as a Vault JWT auth token and needs generating by VSO Name of Kubernetes Secret that has the Vault JWT auth token The Kubernetes Secret must contain a key named jwt which references the JWT token and must exist in the namespace of any consuming VaultSecret CR This is a required field if a JWT token is provided serviceAccount v controller manager clientcache storageencryption jwt serviceaccount string default Kubernetes ServiceAccount to generate a service account JWT tokenAudiences v controller manager clientcache storageencryption jwt tokenaudiences array string Token Audience should match the bound audiences or the aud list in bound claims if applicable of the Vault JWT auth role appRole v controller manager clientcache storageencryption approle AppRole auth method specific configuration roleId v controller manager clientcache storageencryption approle roleid string AppRole Role s RoleID to use for authenticating to Vault This is a required field when using appRole and must be setup in Vault prior to deploying the helm chart secretRef v controller manager clientcache storageencryption approle secretref string Name of Kubernetes Secret that has the AppRole Role s SecretID used to authenticate with Vault The Kubernetes Secret must contain a key named id which references the AppRole Role s SecretID and must exist in the namespace of any consuming VaultSecret CR This is a required field when using appRole and must be setup in Vault prior to deploying the helm chart aws v controller manager clientcache storageencryption aws AWS auth method specific configuration role v controller manager clientcache storageencryption aws role string Vault Auth Role to use This is a required field and must be setup in Vault prior to deploying the helm chart if using the AWS for the Transit auth method region v controller manager clientcache storageencryption aws region string AWS region to use for signing the authentication request Optional but most commonly will be the EKS cluster region headerValue v controller manager clientcache storageencryption aws headervalue string Vault header value to include in the STS signing request sessionName v controller manager clientcache storageencryption aws sessionname string The role session name to use when creating a WebIdentity provider stsEndpoint v controller manager clientcache storageencryption aws stsendpoint string The STS endpoint to use if not set will use the default iamEndpoint v controller manager clientcache storageencryption aws iamendpoint string The IAM endpoint to use if not set will use the default secretRef v controller manager clientcache storageencryption aws secretref string The name of a Kubernetes Secret which holds credentials for AWS Supported keys include access key id secret access key session token irsaServiceAccount v controller manager clientcache storageencryption aws irsaserviceaccount string Name of a Kubernetes service account that is configured with IAM Roles for Service Accounts IRSA Should be annotated with eks amazonaws com role arn gcp v 
        - `role` ((#v-controller-manager-clientcache-storageencryption-gcp-role)) (`string: ""`) - Vault Auth Role to use. This is a required field and must be setup in Vault prior to deploying the helm chart if using GCP for the Transit auth method.

        - `workloadIdentityServiceAccount` ((#v-controller-manager-clientcache-storageencryption-gcp-workloadidentityserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured for workload identity in GKE.

        - `region` ((#v-controller-manager-clientcache-storageencryption-gcp-region)) (`string: ""`) - GCP Region of the GKE cluster's identity provider. Defaults to the region returned from the operator pod's local metadata server if unspecified.

        - `clusterName` ((#v-controller-manager-clientcache-storageencryption-gcp-clustername)) (`string: ""`) - GKE cluster name. Defaults to the cluster name returned from the operator pod's local metadata server if unspecified.

        - `projectID` ((#v-controller-manager-clientcache-storageencryption-gcp-projectid)) (`string: ""`) - GCP project id. Defaults to the project id returned from the operator pod's local metadata server if unspecified.

      - `params` ((#v-controller-manager-clientcache-storageencryption-params)) (`map`) - Params to use when authenticating to Vault using this auth method, e.g. `param-something1: "foo"`.

      - `headers` ((#v-controller-manager-clientcache-storageencryption-headers)) (`map`) - Headers to be included in all Vault requests, e.g. `X-vault-something1: "foo"`.

  - `maxConcurrentReconciles` ((#v-controller-manager-maxconcurrentreconciles)) (`integer`) - Defines the maximum number of concurrent reconciles for each controller. May also be set via the `VSO_MAX_CONCURRENT_RECONCILES` environment variable. Default: `100`.

  - `extraEnv` ((#v-controller-manager-extraenv)) (`array<map>`) - Defines additional environment variables to be added to the vault-secrets-operator manager container. Example:

    ```yaml
    extraEnv:
      - name: HTTP_PROXY
        value: http://proxy.example.com
      - name: VSO_OUTPUT_FORMAT
        value: json
      - name: VSO_CLIENT_CACHE_SIZE
        value: "20000"
      - name: VSO_CLIENT_CACHE_PERSISTENCE_MODEL
        value: direct-encrypted
      - name: VSO_MAX_CONCURRENT_RECONCILES
        value: "30"
    ```

  - `extraArgs` ((#v-controller-manager-extraargs)) (`array`) - Defines additional commandline arguments to be passed to the vault-secrets-operator manager container.

  - `resources` ((#v-controller-manager-resources)) (`map`) - Configures the default resources for the vault-secrets-operator container. For more information on configuring resources, see the K8s documentation: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

    - `limits` ((#v-controller-manager-resources-limits))

      - `cpu` ((#v-controller-manager-resources-limits-cpu)) (`string: 500m`)

      - `memory` ((#v-controller-manager-resources-limits-memory)) (`string: 128Mi`)

    - `requests` ((#v-controller-manager-resources-requests))

      - `cpu` ((#v-controller-manager-resources-requests-cpu)) (`string: 10m`)

      - `memory` ((#v-controller-manager-resources-requests-memory)) (`string: 64Mi`)

- `podSecurityContext` ((#v-controller-podsecuritycontext)) - Configures the Pod Security Context: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

  - `runAsNonRoot` ((#v-controller-podsecuritycontext-runasnonroot)) (`boolean: true`)

- `securityContext` ((#v-controller-securitycontext)) - Configures the Container Security Context: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

  - `allowPrivilegeEscalation` ((#v-controller-securitycontext-allowprivilegeescalation)) (`boolean: false`)

- `controllerConfigMapYaml` ((#v-controller-controllerconfigmapyaml)) (`map`) - Sets the configuration settings used by the controller. Any custom changes will be reflected in the data field of the configmap. For more information on configuring resources, see the K8s documentation: https://kubernetes.io/docs/concepts/configuration/configmap/

  - `health` ((#v-controller-controllerconfigmapyaml-health))

    - `healthProbeBindAddress` ((#v-controller-controllerconfigmapyaml-health-healthprobebindaddress)) (`string: :8081`)

  - `leaderElection` ((#v-controller-controllerconfigmapyaml-leaderelection))

    - `leaderElect` ((#v-controller-controllerconfigmapyaml-leaderelection-leaderelect)) (`boolean: true`)

    - `resourceName` ((#v-controller-controllerconfigmapyaml-leaderelection-resourcename)) (`string: b0d477c0.hashicorp.com`)

  - `metrics` ((#v-controller-controllerconfigmapyaml-metrics))

    - `bindAddress` ((#v-controller-controllerconfigmapyaml-metrics-bindaddress)) (`string: 127.0.0.1:8080`)

  - `webhook` ((#v-controller-controllerconfigmapyaml-webhook))

    - `port` ((#v-controller-controllerconfigmapyaml-webhook-port)) (`integer: 9443`)

- `kubernetesClusterDomain` ((#v-controller-kubernetesclusterdomain)) (`string: cluster.local`) - Configures the environment variable `KUBERNETES_CLUSTER_DOMAIN` used by KubeDNS.

- `terminationGracePeriodSeconds` ((#v-controller-terminationgraceperiodseconds)) (`integer: 120`) - Duration in seconds the pod needs to terminate gracefully. See: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/

- `preDeleteHookTimeoutSeconds` ((#v-controller-predeletehooktimeoutseconds)) (`integer: 120`) - Timeout in seconds for the pre-delete hook.

### metricsService ((#h-metricsservice))

- `metricsService` ((#v-metricsservice)) (`map`) - Configure the metrics service ports used by the metrics service. Set the configuration for the metricsService port.

  - `ports` ((#v-metricsservice-ports)) (`map`) - Set the port settings for the metrics service. For more information on configuring resources, see the K8s documentation: https://kubernetes.io/docs/concepts/services-networking/service/

    - `name` ((#v-metricsservice-ports-name)) (`string: https`)

    - `port` ((#v-metricsservice-ports-port)) (`integer: 8443`)

    - `protocol` ((#v-metricsservice-ports-protocol)) (`string: TCP`)

    - `targetPort` ((#v-metricsservice-ports-targetport)) (`string: https`)

  - `type` ((#v-metricsservice-type)) (`string: ClusterIP`)

### defaultVaultConnection ((#h-defaultvaultconnection))

- `defaultVaultConnection` ((#v-defaultvaultconnection)) - Configures the default VaultConnection CR which will be used by resources if they do not specify a VaultConnection reference. The name is `default` and it will always be installed in the same namespace as the operator. NOTE: It is strongly recommended to deploy the vault secrets operator in a secure Vault environment, which includes a configuration utilizing TLS and installing Vault into its own restricted namespace.

  - `enabled` ((#v-defaultvaultconnection-enabled)) (`boolean: false`) - toggles the deployment of the VaultConnection CR

  - `address` ((#v-defaultvaultconnection-address)) (`string: ""`) - Address of the Vault Server. Example: `http://vault.default.svc.cluster.local:8200`

  - `caCertSecret` ((#v-defaultvaultconnection-cacertsecret)) (`string: ""`) - CACertSecret is the name of a Kubernetes secret containing the trusted PEM encoded CA certificate chain as `ca.crt`. Note: This secret must exist prior to deploying the CR.

  - `tlsServerName` ((#v-defaultvaultconnection-tlsservername)) (`string: ""`) - TLSServerName to use as the SNI host for TLS connections.

  - `skipTLSVerify` ((#v-defaultvaultconnection-skiptlsverify)) (`boolean: false`) - SkipTLSVerify for TLS connections.

  - `headers` ((#v-defaultvaultconnection-headers)) (`map`) - Headers to be included in all Vault requests, e.g. `X-vault-something: "foo"`.

### defaultAuthMethod ((#h-defaultauthmethod))

- `defaultAuthMethod` ((#v-defaultauthmethod)) - Configures and deploys the default VaultAuthMethod CR which will be used by resources if they do not specify a VaultAuthMethod reference. The name is `default` and it will always be installed in the same namespace as the operator. NOTE: It is strongly recommended to deploy the vault secrets operator in a secure Vault environment, which includes a configuration utilizing TLS and installing Vault into its own restricted namespace.

  - `enabled` ((#v-defaultauthmethod-enabled)) (`boolean: false`) - toggles the deployment of the VaultAuthMethod CR

  - `namespace` ((#v-defaultauthmethod-namespace)) (`string: ""`) - Vault namespace for the VaultAuthMethod CR

  - `allowedNamespaces` ((#v-defaultauthmethod-allowednamespaces)) (`array<string>`) - Kubernetes namespace glob patterns which are allow-listed for use with the default AuthMethod.

  - `method` ((#v-defaultauthmethod-method)) (`string: kubernetes`) - Vault Auth method to be used with the VaultAuthMethod CR

  - `mount` ((#v-defaultauthmethod-mount)) (`string: kubernetes`) - Mount path for the Vault Auth Method.

  - `kubernetes` ((#v-defaultauthmethod-kubernetes)) - Vault Kubernetes auth method specific configuration

    - `role` ((#v-defaultauthmethod-kubernetes-role)) (`string: ""`) - Vault Auth Role to use. This is a required field and must be setup in Vault prior to deploying the helm chart if `defaultAuthMethod.enabled=true`.

    - `serviceAccount` ((#v-defaultauthmethod-kubernetes-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount associated with the default Vault Auth Role

    - `tokenAudiences` ((#v-defaultauthmethod-kubernetes-tokenaudiences)) (`array<string>`) - Token Audience should match the audience of the vault kubernetes auth role.

  - `jwt` ((#v-defaultauthmethod-jwt)) - Vault JWT auth method specific configuration

    - `role` ((#v-defaultauthmethod-jwt-role)) (`string: ""`) - Vault Auth Role to use. This is a required field and must be setup in Vault prior to deploying the helm chart if using JWT for the default auth method.

    - `secretRef` ((#v-defaultauthmethod-jwt-secretref)) (`string: ""`) - One of the following is required prior to deploying the helm chart: a K8s secret that contains the JWT, or a K8s service account if a service account JWT is used as a Vault JWT auth token and needs generating by VSO. Name of the Kubernetes Secret that has the Vault JWT auth token. The Kubernetes Secret must contain a key named `jwt` which references the JWT token, and it must exist in the namespace of any consuming VaultSecret CR. This is a required field if a JWT token is provided.

    - `serviceAccount` ((#v-defaultauthmethod-jwt-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount to generate a service account JWT

    - `tokenAudiences` ((#v-defaultauthmethod-jwt-tokenaudiences)) (`array<string>`) - Token Audience should match the `bound_audiences` or the `aud` list in `bound_claims`, if applicable, of the Vault JWT auth role.

  - `appRole` ((#v-defaultauthmethod-approle)) - AppRole auth method specific configuration

    - `roleId` ((#v-defaultauthmethod-approle-roleid)) (`string: ""`) - AppRole Role's RoleID to use for authenticating to Vault. This is a required field when using appRole and must be setup in Vault prior to deploying the helm chart.

    - `secretRef` ((#v-defaultauthmethod-approle-secretref)) (`string: ""`) - Name of the Kubernetes Secret that has the AppRole Role's SecretID used to authenticate with Vault. The Kubernetes Secret must contain a key named `id` which references the AppRole Role's SecretID, and it must exist in the namespace of any consuming VaultSecret CR. This is a required field when using appRole and must be setup in Vault prior to deploying the helm chart.

  - `aws` ((#v-defaultauthmethod-aws)) - AWS auth method specific configuration

    - `role` ((#v-defaultauthmethod-aws-role)) (`string: ""`) - Vault Auth Role to use. This is a required field and must be setup in Vault prior to deploying the helm chart if using AWS for the default auth method.

    - `region` ((#v-defaultauthmethod-aws-region)) (`string: ""`) - AWS region to use for signing the authentication request. Optional, but most commonly will be the region where the EKS cluster is running.

    - `headerValue` ((#v-defaultauthmethod-aws-headervalue)) (`string: ""`) - Vault header value to include in the STS signing request

    - `sessionName` ((#v-defaultauthmethod-aws-sessionname)) (`string: ""`) - The role session name to use when creating a WebIdentity provider

    - `stsEndpoint` ((#v-defaultauthmethod-aws-stsendpoint)) (`string: ""`) - The STS endpoint to use; if not set, the default is used

    - `iamEndpoint` ((#v-defaultauthmethod-aws-iamendpoint)) (`string: ""`) - The IAM endpoint to use; if not set, the default is used

    - `secretRef` ((#v-defaultauthmethod-aws-secretref)) (`string: ""`) - The name of a Kubernetes Secret which holds credentials for AWS. Supported keys include `access_key_id`, `secret_access_key`, `session_token`

    - `irsaServiceAccount` ((#v-defaultauthmethod-aws-irsaserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured with IAM Roles for Service Accounts (IRSA). Should be annotated with "eks.amazonaws.com/role-arn".

  - `gcp` ((#v-defaultauthmethod-gcp))

    - `role` ((#v-defaultauthmethod-gcp-role)) (`string: ""`) - Vault Auth Role to use. This is a required field and must be setup in Vault prior to deploying the helm chart if using GCP for the default auth method.

    - `workloadIdentityServiceAccount` ((#v-defaultauthmethod-gcp-workloadidentityserviceaccount)) (`string: ""`) - Name of a Kubernetes service account that is configured for workload identity in GKE.

    - `region` ((#v-defaultauthmethod-gcp-region)) (`string: ""`) - GCP Region of the GKE cluster's identity provider. Defaults to the region returned from the operator pod's local metadata server if unspecified.

    - `clusterName` ((#v-defaultauthmethod-gcp-clustername)) (`string: ""`) - GKE cluster name. Defaults to the cluster name returned from the operator pod's local metadata server if unspecified.

    - `projectID` ((#v-defaultauthmethod-gcp-projectid)) (`string: ""`) - GCP project id. Defaults to the project id returned from the operator pod's local metadata server if unspecified.

  - `params` ((#v-defaultauthmethod-params)) (`map`) - Params to use when authenticating to Vault, e.g. `param-something1: "foo"`.

  - `headers` ((#v-defaultauthmethod-headers)) (`map`) - Headers to be included in all Vault requests, e.g. `X-vault-something1: "foo"`.

  - `vaultAuthGlobalRef` ((#v-defaultauthmethod-vaultauthglobalref)) - VaultAuthGlobalRef

    - `enabled` ((#v-defaultauthmethod-vaultauthglobalref-enabled)) (`boolean: false`) - toggles the inclusion of the VaultAuthGlobal configuration in the default VaultAuth CR

    - `name` ((#v-defaultauthmethod-vaultauthglobalref-name)) (`string: ""`) - Name of the VaultAuthGlobal CR to reference.

    - `namespace` ((#v-defaultauthmethod-vaultauthglobalref-namespace)) (`string: ""`) - Namespace of the VaultAuthGlobal CR to reference.

    - `allowDefault` ((#v-defaultauthmethod-vaultauthglobalref-allowdefault)) (`boolean`) - allow default globals

    - `mergeStrategy` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy))

      - `headers` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy-headers)) (`string: none`) - merge strategy for headers. Valid values are `replace`, `merge`, `none`.

      - `params` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy-params)) (`string: none`) - merge strategy for params. Valid values are `replace`, `merge`, `none`.

### telemetry ((#h-telemetry))

- `telemetry` ((#v-telemetry)) - Configures a Prometheus ServiceMonitor

  - `serviceMonitor` ((#v-telemetry-servicemonitor))

    - `enabled` ((#v-telemetry-servicemonitor-enabled)) (`boolean: false`) - Enable deployment of the Vault Secrets Operator ServiceMonitor CustomResource. The Prometheus operator must be installed before enabling this feature; if not, the chart will fail to install due to missing CustomResourceDefinitions provided by the operator. Instructions on how to install the Helm chart can be found here: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack. More information can be found here: https://github.com/prometheus-operator/prometheus-operator and https://github.com/prometheus-operator/kube-prometheus.

    - `selectors` ((#v-telemetry-servicemonitor-selectors)) (`string: ""`) - Selector labels to add to the ServiceMonitor. When empty, defaults to: `release: prometheus`.

    - `scheme` ((#v-telemetry-servicemonitor-scheme)) (`string: https`) - Scheme of the service Prometheus scrapes metrics from. This must match the scheme of the metrics service of VSO.

    - `port` ((#v-telemetry-servicemonitor-port)) (`string: https`) - Port at which Prometheus scrapes metrics. This must match the port of the metrics service of VSO.

    - `path` ((#v-telemetry-servicemonitor-path)) (`string: /metrics`) - Path at which Prometheus scrapes metrics.

    - `bearerTokenFile` ((#v-telemetry-servicemonitor-bearertokenfile)) (`string: /var/run/secrets/kubernetes.io/serviceaccount/token`) - File Prometheus reads the bearer token from for scraping metrics.

    - `interval` ((#v-telemetry-servicemonitor-interval)) (`string: 30s`) - Interval at which Prometheus scrapes metrics.

    - `scrapeTimeout` ((#v-telemetry-servicemonitor-scrapetimeout)) (`string: 10s`) - Timeout for Prometheus scrapes.

### hooks ((#h-hooks))

- `hooks` ((#v-hooks)) - Configure the behaviour of Helm hooks.

  - `resources` ((#v-hooks-resources)) - Resources common to all hooks.

    - `limits` ((#v-hooks-resources-limits))

      - `cpu` ((#v-hooks-resources-limits-cpu)) (`string: 500m`)

      - `memory` ((#v-hooks-resources-limits-memory)) (`string: 128Mi`)

    - `requests` ((#v-hooks-resources-requests))

      - `cpu` ((#v-hooks-resources-requests-cpu)) (`string: 10m`)

      - `memory` ((#v-hooks-resources-requests-memory)) (`string: 64Mi`)

  - `upgradeCRDs` ((#v-hooks-upgradecrds)) - Configure the Helm pre-upgrade hook that handles custom resource definition (CRD) upgrades.

    - `enabled` ((#v-hooks-upgradecrds-enabled)) (`boolean: true`) - Set to true to automatically upgrade the CRDs. Disabling this will require manual intervention to upgrade the CRDs, so it is recommended to always leave it enabled.

    - `backoffLimit` ((#v-hooks-upgradecrds-backofflimit)) (`integer: 5`) - Limit the number of retries for the CRD upgrade.

    - `executionTimeout` ((#v-hooks-upgradecrds-executiontimeout)) (`string: 30s`) - Set the timeout for the CRD upgrade. The operation should typically take less than 5s to complete.

### tests ((#h-tests))

- `tests` ((#v-tests)) - Used by unit tests, and will not be rendered except when using `helm template`; this can be safely ignored.

  - `enabled` ((#v-tests-enabled)) (`boolean: true`)

<!-- codegen: end -->

## Helm chart examples

The below `config.yaml` results in a single replica installation of the Vault Secrets Operator with a default vault connection and auth method custom resource deployed. It expects a local Vault installation within the kubernetes cluster, accessible via `http://vault.default.svc.cluster.local:8200` with TLS disabled, and a [Vault Auth Method](/vault/docs/auth/kubernetes) to be setup against the `default` ServiceAccount.

```yaml
# config.yaml
defaultVaultConnection:
  enabled: true

defaultAuthMethod:
  enabled: true
```

## Customizing the helm chart

If you need to extend the Helm chart with additional options, we recommend using a third-party tool such as [kustomize](https://github.com/kubernetes-sigs/kustomize) against the project repo's `config` path in the [vault-secrets-operator](https://github.com/hashicorp/vault-secrets-operator) project, as sketched below.
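For illustration, a minimal `kustomization.yaml` along those lines might look like the following sketch. The remote base URL, the ref, and the Deployment name are assumptions made for the example, not values taken from this page; verify them against the repo before use.

```yaml
# kustomization.yaml -- illustrative sketch only; the base URL, ref, and
# Deployment name below are assumptions, not verified against the repo.
resources:
  - https://github.com/hashicorp/vault-secrets-operator/config/default?ref=main
patches:
  - target:
      kind: Deployment
      name: vault-secrets-operator-controller-manager # assumed name
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 2
```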
---
layout: docs
page_title: Vault Secrets Operator API Reference
description: >-
The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets.
---
<!--
copied from docs/api/api-reference.md in the vault-secrets-operator repo.
commit SHA=08a6e5071ffa4faa486bd4b2c53b27585da4680c
-->
# API Reference
## Packages
- [secrets.hashicorp.com/v1beta1](#secretshashicorpcomv1beta1)
## secrets.hashicorp.com/v1beta1
Package v1beta1 contains API Schema definitions for the secrets v1beta1 API group
### Resource Types
- [HCPAuth](#hcpauth)
- [HCPAuthList](#hcpauthlist)
- [HCPVaultSecretsApp](#hcpvaultsecretsapp)
- [HCPVaultSecretsAppList](#hcpvaultsecretsapplist)
- [SecretTransformation](#secrettransformation)
- [SecretTransformationList](#secrettransformationlist)
- [VaultAuth](#vaultauth)
- [VaultAuthGlobal](#vaultauthglobal)
- [VaultAuthGlobalList](#vaultauthgloballist)
- [VaultAuthList](#vaultauthlist)
- [VaultConnection](#vaultconnection)
- [VaultConnectionList](#vaultconnectionlist)
- [VaultDynamicSecret](#vaultdynamicsecret)
- [VaultDynamicSecretList](#vaultdynamicsecretlist)
- [VaultPKISecret](#vaultpkisecret)
- [VaultPKISecretList](#vaultpkisecretlist)
- [VaultStaticSecret](#vaultstaticsecret)
- [VaultStaticSecretList](#vaultstaticsecretlist)
#### Destination
Destination provides the configuration that will be applied to the
destination Kubernetes Secret during a Vault Secret -> K8s Secret sync.
_Appears in:_
- [HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)
- [VaultDynamicSecretSpec](#vaultdynamicsecretspec)
- [VaultPKISecretSpec](#vaultpkisecretspec)
- [VaultStaticSecretSpec](#vaultstaticsecretspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | Name of the Secret | | |
| `create` _boolean_ | Create the destination Secret.<br />If the Secret already exists this should be set to false. | false | |
| `overwrite` _boolean_ | Overwrite the destination Secret if it exists and Create is true. This is<br />useful when migrating to VSO from a previous secret deployment strategy. | false | |
| `labels` _object (keys:string, values:string)_ | Labels to apply to the Secret. Requires Create to be set to true. | | |
| `annotations` _object (keys:string, values:string)_ | Annotations to apply to the Secret. Requires Create to be set to true. | | |
| `type` _[SecretType](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secrettype-v1-core)_ | Type of Kubernetes Secret. Requires Create to be set to true.<br />Defaults to Opaque. | | |
| `transformation` _[Transformation](#transformation)_ | Transformation provides configuration for transforming the secret data before<br />it is stored in the Destination. | | |
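For orientation, a `destination` block inside a syncable secret spec might look like the following sketch; the Secret name and labels are placeholders, not values prescribed by this reference:

```yaml
# Excerpt from a syncable secret CR spec (e.g. a VaultStaticSecret)
spec:
  destination:
    name: app-secret        # placeholder: the K8s Secret to sync into
    create: true            # have VSO create the Secret if it does not exist
    overwrite: false
    labels:
      app: my-app           # placeholder label; requires create: true
    type: Opaque
```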
#### HCPAuth
HCPAuth is the Schema for the hcpauths API
_Appears in:_
- [HCPAuthList](#hcpauthlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `HCPAuth` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[HCPAuthSpec](#hcpauthspec)_ | | | |
#### HCPAuthList
HCPAuthList contains a list of HCPAuth
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `HCPAuthList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[HCPAuth](#hcpauth) array_ | | | |
#### HCPAuthServicePrincipal
HCPAuthServicePrincipal provides HCPAuth configuration options needed for
authenticating to HCP using a service principal configured in SecretRef.
_Appears in:_
- [HCPAuthSpec](#hcpauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's<br />(VDS/VSS/PKI/HCP) namespace which provides the HCP ServicePrincipal clientID,<br />and clientSecret.<br />The secret data must have the following structure {<br /> "clientID": "clientID",<br /> "clientSecret": "clientSecret",<br />} | | |
#### HCPAuthSpec
HCPAuthSpec defines the desired state of HCPAuth
_Appears in:_
- [HCPAuth](#hcpauth)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `organizationID` _string_ | OrganizationID of the HCP organization. | | |
| `projectID` _string_ | ProjectID of the HCP project. | | |
| `allowedNamespaces` _string array_ | AllowedNamespaces Kubernetes Namespaces which are allow-listed for use with this AuthMethod.<br />This field allows administrators to customize which Kubernetes namespaces are authorized to<br />use this AuthMethod. While Vault will still enforce its own rules, this has the added<br />configurability of restricting which HCPAuthMethods can be used by which namespaces.<br />Accepted values:<br />[]{"*"} - wildcard, all namespaces.<br />[]{"a", "b"} - list of namespaces.<br />unset - disallow all namespaces except the Operator's and the HCPAuthMethod's namespace, this<br />is the default behavior. | | |
| `method` _string_ | Method to use when authenticating to Vault. | servicePrincipal | Enum: [servicePrincipal] <br /> |
| `servicePrincipal` _[HCPAuthServicePrincipal](#hcpauthserviceprincipal)_ | ServicePrincipal provides the necessary configuration for authenticating to<br />HCP using a service principal. For security reasons, only project-level<br />service principals should ever be used. | | |
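Putting the fields above together, a minimal HCPAuth manifest could look like this sketch; all IDs and names are placeholders:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: HCPAuth
metadata:
  name: default
  namespace: vso-system             # placeholder: the operator's namespace
spec:
  organizationID: my-hcp-org-id     # placeholder HCP organization ID
  projectID: my-hcp-project-id      # placeholder HCP project ID
  method: servicePrincipal
  servicePrincipal:
    # Kubernetes secret with data keys "clientID" and "clientSecret"
    secretRef: hcp-sp-credentials   # placeholder secret name
```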
#### HCPVaultSecretsApp
HCPVaultSecretsApp is the Schema for the hcpvaultsecretsapps API
_Appears in:_
- [HCPVaultSecretsAppList](#hcpvaultsecretsapplist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `HCPVaultSecretsApp` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)_ | | | |
#### HCPVaultSecretsAppList
HCPVaultSecretsAppList contains a list of HCPVaultSecretsApp
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `HCPVaultSecretsAppList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[HCPVaultSecretsApp](#hcpvaultsecretsapp) array_ | | | |
#### HCPVaultSecretsAppSpec
HCPVaultSecretsAppSpec defines the desired state of HCPVaultSecretsApp
_Appears in:_
- [HCPVaultSecretsApp](#hcpvaultsecretsapp)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `appName` _string_ | AppName of the Vault Secrets Application that is to be synced. | | |
| `hcpAuthRef` _string_ | HCPAuthRef to the HCPAuth resource, can be prefixed with a namespace, eg:<br />`namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default<br />to the namespace of the HCPAuth CR. If no value is specified for HCPAuthRef the<br />Operator will default to the `default` HCPAuth, configured in the operator's<br />namespace. | | |
| `refreshAfter` _string_ | RefreshAfter a period of time, in duration notation e.g. 30s, 1m, 24h | 600s | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s)<br />consuming the HCP Vault Secrets App does not support dynamically reloading a<br />rotated secret. In that case, one or more RolloutRestartTarget(s) can be<br />configured here. The Operator will trigger a "rollout-restart" for each target<br />whenever the Vault secret changes between reconciliation events. See<br />RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the HCP Vault<br />Application secrets to Kubernetes. | | |
| `syncConfig` _[HVSSyncConfig](#hvssyncconfig)_ | SyncConfig configures sync behavior from HVS to VSO | | |
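As a sketch, an HCPVaultSecretsApp that syncs an HCP Vault Secrets application into a Kubernetes Secret might be written as follows; all names are placeholders:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: HCPVaultSecretsApp
metadata:
  name: web-app
  namespace: app-ns                 # placeholder
spec:
  appName: web-app-secrets          # placeholder HCP Vault Secrets app name
  hcpAuthRef: default               # falls back to the default HCPAuth if unset
  refreshAfter: 600s
  destination:
    name: web-app-secret
    create: true
```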
#### HVSDynamicStatus
HVSDynamicStatus defines the observed state of a dynamic secret within an HCP
Vault Secrets App
_Appears in:_
- [HCPVaultSecretsAppStatus](#hcpvaultsecretsappstatus)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | Name of the dynamic secret | | |
| `createdAt` _string_ | CreatedAt is the timestamp string of when the dynamic secret was created | | |
| `expiresAt` _string_ | ExpiresAt is the timestamp string of when the dynamic secret will expire | | |
| `ttl` _string_ | TTL is the time-to-live of the dynamic secret in seconds | | |
#### HVSDynamicSyncConfig
HVSDynamicSyncConfig configures sync behavior for HVS dynamic secrets.
_Appears in:_
- [HVSSyncConfig](#hvssyncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `renewalPercent` _integer_ | RenewalPercent is the percent out of 100 of a dynamic secret's TTL when<br />new secrets are generated. Defaults to 67 percent plus up to 10% jitter. | 67 | Maximum: 90 <br />Minimum: 0 <br /> |
#### HVSSyncConfig
HVSSyncConfig configures sync behavior from HVS to VSO
_Appears in:_
- [HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `dynamic` _[HVSDynamicSyncConfig](#hvsdynamicsyncconfig)_ | Dynamic configures sync behavior for dynamic secrets. | | |
#### MergeStrategy
MergeStrategy provides the configuration for merging HTTP headers and
parameters from the referring VaultAuth resource and its VaultAuthGlobal
resource.
_Appears in:_
- [VaultAuthGlobalRef](#vaultauthglobalref)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `headers` _string_ | Headers configures the merge strategy for HTTP headers that are included in<br />all Vault requests. Choices are `union`, `replace`, or `none`.<br /><br />If `union` is set, the headers from the VaultAuthGlobal and VaultAuth<br />resources are merged. The headers from the VaultAuth always take precedence.<br /><br />If `replace` is set, the first set of non-empty headers is used, taken in order from:<br />VaultAuth, VaultAuthGlobal auth method, VaultAuthGlobal default headers.<br /><br />If `none` is set, the headers from the<br />VaultAuthGlobal resource are ignored and only the headers from the VaultAuth<br />resource are used. The default is `none`. | | Enum: [union replace none] <br /> |
| `params` _string_ | Params configures the merge strategy for HTTP parameters that are included in<br />all Vault requests. Choices are `union`, `replace`, or `none`.<br /><br />If `union` is set, the parameters from the VaultAuthGlobal and VaultAuth<br />resources are merged. The parameters from the VaultAuth always take<br />precedence.<br /><br />If `replace` is set, the first set of non-empty parameters is used, taken in order from:<br />VaultAuth, VaultAuthGlobal auth method, VaultAuthGlobal default parameters.<br /><br />If `none` is set, the parameters from the VaultAuthGlobal resource are ignored<br />and only the parameters from the VaultAuth resource are used. The default is<br />`none`. | | Enum: [union replace none] <br /> |
#### RolloutRestartTarget
RolloutRestartTarget provides the configuration required to perform a
rollout-restart of the supported resources upon Vault Secret rotation.
The rollout-restart is triggered by patching the target resource's
'spec.template.metadata.annotations' to include 'vso.secrets.hashicorp.com/restartedAt'
with a timestamp value of when the trigger was executed.
E.g. vso.secrets.hashicorp.com/restartedAt: "2023-03-23T13:39:31Z"
Supported resources: Deployment, DaemonSet, StatefulSet, argo.Rollout
_Appears in:_
- [HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)
- [VaultDynamicSecretSpec](#vaultdynamicsecretspec)
- [VaultPKISecretSpec](#vaultpkisecretspec)
- [VaultStaticSecretSpec](#vaultstaticsecretspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `kind` _string_ | Kind of the resource | | Enum: [Deployment DaemonSet StatefulSet argo.Rollout] <br /> |
| `name` _string_ | Name of the resource | | |
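For example, a spec excerpt that restarts a Deployment and an Argo Rollout on secret rotation could look like this; the resource names are placeholders:

```yaml
# Excerpt from a syncable secret spec
rolloutRestartTargets:
  - kind: Deployment
    name: my-app
  - kind: argo.Rollout
    name: my-rollout
```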
#### SecretTransformation
SecretTransformation is the Schema for the secrettransformations API
_Appears in:_
- [SecretTransformationList](#secrettransformationlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `SecretTransformation` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[SecretTransformationSpec](#secrettransformationspec)_ | | | |
#### SecretTransformationList
SecretTransformationList contains a list of SecretTransformation
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `SecretTransformationList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[SecretTransformation](#secrettransformation) array_ | | | |
#### SecretTransformationSpec
SecretTransformationSpec defines the desired state of SecretTransformation
_Appears in:_
- [SecretTransformation](#secrettransformation)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `templates` _object (keys:string, values:[Template](#template))_ | Templates maps a template name to its Template. Templates are always included<br />in the rendered K8s Secret with the specified key. | | |
| `sourceTemplates` _[SourceTemplate](#sourcetemplate) array_ | SourceTemplates are never included in the rendered K8s Secret, they can be<br />used to provide common template definitions, etc. | | |
| `includes` _string array_ | Includes contains regex patterns used to filter top-level source secret data<br />fields for inclusion in the final K8s Secret data. These pattern filters are<br />never applied to templated fields as defined in Templates. They are always<br />applied last. | | |
| `excludes` _string array_ | Excludes contains regex patterns used to filter top-level source secret data<br />fields for exclusion from the final K8s Secret data. These pattern filters are<br />never applied to templated fields as defined in Templates. They are always<br />applied before any inclusion patterns. To exclude all source secret data<br />fields, you can configure the single pattern ".*". | | |
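A small SecretTransformation sketch follows. The template body is illustrative only, and the `.Secrets` template context name is an assumption for the example, not something defined by this reference:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: SecretTransformation
metadata:
  name: app-transformation            # placeholder
spec:
  templates:
    # the map key ("url") becomes the data key in the rendered K8s Secret
    url:
      # Go text/template over the source secret data; ".Secrets" is assumed
      text: '{{- printf "https://%s.example.com" .Secrets.host -}}'
  excludes:
    - .*                              # drop all non-templated source fields
```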
#### SourceTemplate
SourceTemplate provides source templating configuration.
_Appears in:_
- [SecretTransformationSpec](#secrettransformationspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | | | |
| `text` _string_ | Text contains the Go text template format. The template<br />references attributes from the data structure of the source secret.<br />Refer to https://pkg.go.dev/text/template for more information. | | |
#### StorageEncryption
StorageEncryption provides the necessary configuration to encrypt the storage cache
entries using Vault's Transit engine.
_Appears in:_
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `mount` _string_ | Mount path of the Transit engine in Vault. | | |
| `keyName` _string_ | KeyName to use for encrypt/decrypt operations via Vault Transit. | | |
#### SyncConfig
SyncConfig configures sync behavior from Vault to VSO
_Appears in:_
- [VaultStaticSecretSpec](#vaultstaticsecretspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `instantUpdates` _boolean_ | InstantUpdates is a flag to indicate that event-driven updates are<br />enabled for this VaultStaticSecret | | |
#### Template
Template provides templating configuration.
_Appears in:_
- [SecretTransformationSpec](#secrettransformationspec)
- [Transformation](#transformation)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | Name of the Template | | |
| `text` _string_ | Text contains the Go text template format. The template<br />references attributes from the data structure of the source secret.<br />Refer to https://pkg.go.dev/text/template for more information. | | |
#### TemplateRef
TemplateRef points to templating text that is stored in a
SecretTransformation custom resource.
_Appears in:_
- [TransformationRef](#transformationref)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | Name of the Template in SecretTransformationSpec.Templates. | | |
| `keyOverride` _string_ | KeyOverride to the rendered template in the Destination secret. If Key is<br />empty, then the Key from reference spec will be used. Set this to override the<br />Key set from the reference spec. | | |
#### Transformation
_Appears in:_
- [Destination](#destination)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `templates` _object (keys:string, values:[Template](#template))_ | Templates maps a template name to its Template. Templates are always included<br />in the rendered K8s Secret, and take precedence over templates defined in a<br />SecretTransformation. | | |
| `transformationRefs` _[TransformationRef](#transformationref) array_ | TransformationRefs contain references to template configuration from<br />SecretTransformation. | | |
| `includes` _string array_ | Includes contains regex patterns used to filter top-level source secret data<br />fields for inclusion in the final K8s Secret data. These pattern filters are<br />never applied to templated fields as defined in Templates. They are always<br />applied last. | | |
| `excludes` _string array_ | Excludes contains regex patterns used to filter top-level source secret data<br />fields for exclusion from the final K8s Secret data. These pattern filters are<br />never applied to templated fields as defined in Templates. They are always<br />applied before any inclusion patterns. To exclude all source secret data<br />fields, you can configure the single pattern ".*". | | |
| `excludeRaw` _boolean_ | ExcludeRaw data from the destination Secret. Exclusion policy can be set<br />globally by including 'exclude-raw' in the '--global-transformation-options'<br />command line flag. If set, the command line flag always takes precedence over<br />this configuration. | | |
#### TransformationRef
TransformationRef contains the configuration for accessing templates from an
SecretTransformation resource. TransformationRefs can be shared across all
syncable secret custom resources.
_Appears in:_
- [Transformation](#transformation)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `namespace` _string_ | Namespace of the SecretTransformation resource. | | |
| `name` _string_ | Name of the SecretTransformation resource. | | |
| `templateRefs` _[TemplateRef](#templateref) array_ | TemplateRefs map to a Template found in this TransformationRef. If empty, then<br />all templates from the SecretTransformation will be rendered to the K8s Secret. | | |
| `ignoreIncludes` _boolean_ | IgnoreIncludes controls whether to use the SecretTransformation's Includes<br />data key filters. | | |
| `ignoreExcludes` _boolean_ | IgnoreExcludes controls whether to use the SecretTransformation's Excludes<br />data key filters. | | |
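Tying this together, a destination's `transformation` block referencing the SecretTransformation sketched earlier might look like the following; all names are placeholders:

```yaml
# Excerpt from a syncable secret spec
destination:
  name: app-secret
  create: true
  transformation:
    transformationRefs:
      - name: app-transformation   # SecretTransformation in the same namespace
        ignoreExcludes: false      # keep the referenced Excludes filters
    excludeRaw: true               # keep raw source data out of the Secret
```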
#### VaultAuth
VaultAuth is the Schema for the vaultauths API
_Appears in:_
- [VaultAuthList](#vaultauthlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultAuth` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultAuthSpec](#vaultauthspec)_ | | | |
#### VaultAuthConfigAWS
VaultAuthConfigAWS provides VaultAuth configuration options needed for
authenticating to Vault via an AWS AuthMethod. Will use creds from
`SecretRef` or `IRSAServiceAccount` if provided, in that order. If neither
are provided, the underlying node role or instance profile will be used to
authenticate to Vault.
_Appears in:_
- [VaultAuthGlobalConfigAWS](#vaultauthglobalconfigaws)
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Vault role to use for authenticating | | |
| `region` _string_ | AWS Region to use for signing the authentication request | | |
| `headerValue` _string_ | The Vault header value to include in the STS signing request | | |
| `sessionName` _string_ | The role session name to use when creating a webidentity provider | | |
| `stsEndpoint` _string_ | The STS endpoint to use; if not set will use the default | | |
| `iamEndpoint` _string_ | The IAM endpoint to use; if not set will use the default | | |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes Secret in the consumer's (VDS/VSS/PKI) namespace<br />which holds credentials for AWS. Expected keys include `access_key_id`, `secret_access_key`,<br />`session_token` | | |
| `irsaServiceAccount` _string_ | IRSAServiceAccount name to use with IAM Roles for Service Accounts<br />(IRSA), and should be annotated with "eks.amazonaws.com/role-arn". This<br />ServiceAccount will be checked for other EKS annotations:<br />eks.amazonaws.com/audience and eks.amazonaws.com/token-expiration | | |
#### VaultAuthConfigAppRole
VaultAuthConfigAppRole provides VaultAuth configuration options needed for authenticating to
Vault via an AppRole AuthMethod.
_Appears in:_
- [VaultAuthGlobalConfigAppRole](#vaultauthglobalconfigapprole)
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `roleId` _string_ | RoleID of the AppRole Role to use for authenticating to Vault. | | |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS/VSS/PKI) namespace which<br />provides the AppRole Role's SecretID. The secret must have a key named `id` which holds the<br />AppRole Role's secretID. | | |
#### VaultAuthConfigGCP
VaultAuthConfigGCP provides VaultAuth configuration options needed for
authenticating to Vault via a GCP AuthMethod, using workload identity
_Appears in:_
- [VaultAuthGlobalConfigGCP](#vaultauthglobalconfiggcp)
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Vault role to use for authenticating | | |
| `workloadIdentityServiceAccount` _string_ | WorkloadIdentityServiceAccount is the name of a Kubernetes service<br />account (in the same Kubernetes namespace as the Vault*Secret referencing<br />this resource) which has been configured for workload identity in GKE.<br />Should be annotated with "iam.gke.io/gcp-service-account". | | |
| `region` _string_ | GCP Region of the GKE cluster's identity provider. Defaults to the region<br />returned from the operator pod's local metadata server. | | |
| `clusterName` _string_ | GKE cluster name. Defaults to the cluster-name returned from the operator<br />pod's local metadata server. | | |
| `projectID` _string_ | GCP project ID. Defaults to the project-id returned from the operator<br />pod's local metadata server. | | |
#### VaultAuthConfigJWT
VaultAuthConfigJWT provides VaultAuth configuration options needed for authenticating to Vault.
_Appears in:_
- [VaultAuthGlobalConfigJWT](#vaultauthglobalconfigjwt)
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Role to use for authenticating to Vault. | | |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS/VSS/PKI) namespace which<br />provides the JWT token to authenticate to Vault's JWT authentication backend. The secret must<br />have a key named `jwt` which holds the JWT token. | | |
| `serviceAccount` _string_ | ServiceAccount to use when creating a ServiceAccount token to authenticate to Vault's<br />JWT authentication backend. | | |
| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. | | |
| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br /> |
#### VaultAuthConfigKubernetes
VaultAuthConfigKubernetes provides VaultAuth configuration options needed for authenticating to Vault.
_Appears in:_
- [VaultAuthGlobalConfigKubernetes](#vaultauthglobalconfigkubernetes)
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Role to use for authenticating to Vault. | | |
| `serviceAccount` _string_ | ServiceAccount to use when authenticating to Vault's<br />authentication backend. This must reside in the consuming secret's (VDS/VSS/PKI) namespace. | | |
| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. | | |
| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br /> |
#### VaultAuthGlobal
VaultAuthGlobal is the Schema for the vaultauthglobals API
_Appears in:_
- [VaultAuthGlobalList](#vaultauthgloballist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultAuthGlobal` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultAuthGlobalSpec](#vaultauthglobalspec)_ | | | |
#### VaultAuthGlobalConfigAWS
_Appears in:_
- [VaultAuthGlobalSpec](#vaultauthglobalspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Vault role to use for authenticating | | |
| `region` _string_ | AWS Region to use for signing the authentication request | | |
| `headerValue` _string_ | The Vault header value to include in the STS signing request | | |
| `sessionName` _string_ | The role session name to use when creating a webidentity provider | | |
| `stsEndpoint` _string_ | The STS endpoint to use; if not set will use the default | | |
| `iamEndpoint` _string_ | The IAM endpoint to use; if not set will use the default | | |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes Secret in the consumer's (VDS/VSS/PKI) namespace<br />which holds credentials for AWS. Expected keys include `access_key_id`, `secret_access_key`,<br />`session_token` | | |
| `irsaServiceAccount` _string_ | IRSAServiceAccount name to use with IAM Roles for Service Accounts<br />(IRSA), and should be annotated with "eks.amazonaws.com/role-arn". This<br />ServiceAccount will be checked for other EKS annotations:<br />eks.amazonaws.com/audience and eks.amazonaws.com/token-expiration | | |
| `namespace` _string_ | Namespace to auth to in Vault | | |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
#### VaultAuthGlobalConfigAppRole
_Appears in:_
- [VaultAuthGlobalSpec](#vaultauthglobalspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `roleId` _string_ | RoleID of the AppRole Role to use for authenticating to Vault. | | |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS/VSS/PKI) namespace which<br />provides the AppRole Role's SecretID. The secret must have a key named `id` which holds the<br />AppRole Role's secretID. | | |
| `namespace` _string_ | Namespace to auth to in Vault | | |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
#### VaultAuthGlobalConfigGCP
_Appears in:_
- [VaultAuthGlobalSpec](#vaultauthglobalspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Vault role to use for authenticating | | |
| `workloadIdentityServiceAccount` _string_ | WorkloadIdentityServiceAccount is the name of a Kubernetes service<br />account (in the same Kubernetes namespace as the Vault*Secret referencing<br />this resource) which has been configured for workload identity in GKE.<br />Should be annotated with "iam.gke.io/gcp-service-account". | | |
| `region` _string_ | GCP Region of the GKE cluster's identity provider. Defaults to the region<br />returned from the operator pod's local metadata server. | | |
| `clusterName` _string_ | GKE cluster name. Defaults to the cluster-name returned from the operator<br />pod's local metadata server. | | |
| `projectID` _string_ | GCP project ID. Defaults to the project-id returned from the operator<br />pod's local metadata server. | | |
| `namespace` _string_ | Namespace to auth to in Vault | | |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
#### VaultAuthGlobalConfigJWT
_Appears in:_
- [VaultAuthGlobalSpec](#vaultauthglobalspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Role to use for authenticating to Vault. | | |
| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS/VSS/PKI) namespace which<br />provides the JWT token to authenticate to Vault's JWT authentication backend. The secret must<br />have a key named `jwt` which holds the JWT token. | | |
| `serviceAccount` _string_ | ServiceAccount to use when creating a ServiceAccount token to authenticate to Vault's<br />JWT authentication backend. | | |
| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. | | |
| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br /> |
| `namespace` _string_ | Namespace to auth to in Vault | | |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
#### VaultAuthGlobalConfigKubernetes
_Appears in:_
- [VaultAuthGlobalSpec](#vaultauthglobalspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `role` _string_ | Role to use for authenticating to Vault. | | |
| `serviceAccount` _string_ | ServiceAccount to use when authenticating to Vault's<br />authentication backend. This must reside in the consuming secret's (VDS/VSS/PKI) namespace. | | |
| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. | | |
| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br /> |
| `namespace` _string_ | Namespace to auth to in Vault | | |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
#### VaultAuthGlobalList
VaultAuthGlobalList contains a list of VaultAuthGlobal
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultAuthGlobalList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultAuthGlobal](#vaultauthglobal) array_ | | | |
#### VaultAuthGlobalRef
VaultAuthGlobalRef is a reference to a VaultAuthGlobal resource. A referring
VaultAuth resource can use the VaultAuthGlobal resource to share common
configuration across multiple VaultAuth resources. The VaultAuthGlobal
resource is used to store global configuration for VaultAuth resources.
_Appears in:_
- [VaultAuthSpec](#vaultauthspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | Name of the VaultAuthGlobal resource. | | Pattern: `^([a-z0-9.-]{1,253})$` <br /> |
| `namespace` _string_ | Namespace of the VaultAuthGlobal resource. If not provided, the namespace of<br />the referring VaultAuth resource is used. | | Pattern: `^([a-z0-9.-]{1,253})$` <br /> |
| `mergeStrategy` _[MergeStrategy](#mergestrategy)_ | MergeStrategy configures the merge strategy for HTTP headers and parameters<br />that are included in all Vault authentication requests. | | |
| `allowDefault` _boolean_ | AllowDefault when set to true will use the default VaultAuthGlobal resource<br />as the default if Name is not set. The 'allow-default-globals' option must be<br />set on the operator's '-global-vault-auth-options' flag<br /><br />The default VaultAuthGlobal search is conditional.<br />When a ref Namespace is set, the search for the default<br />VaultAuthGlobal resource is constrained to that namespace.<br />Otherwise, the search order is:<br />1. The default VaultAuthGlobal resource in the referring VaultAuth resource's<br />namespace.<br />2. The default VaultAuthGlobal resource in the Operator's namespace. | | |
#### VaultAuthGlobalSpec
VaultAuthGlobalSpec defines the desired state of VaultAuthGlobal
_Appears in:_
- [VaultAuthGlobal](#vaultauthglobal)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `allowedNamespaces` _string array_ | AllowedNamespaces Kubernetes Namespaces which are allow-listed for use with<br />this VaultAuthGlobal. This field allows administrators to customize which<br />Kubernetes namespaces are authorized to reference this resource. While Vault<br />will still enforce its own rules, this has the added configurability of<br />restricting which VaultAuthMethods can be used by which namespaces. Accepted<br />values: []{"*"} - wildcard, all namespaces. []{"a", "b"} - list of namespaces.<br />unset - disallow all namespaces except the Operator's and the referring<br />VaultAuthMethod's namespace, this is the default behavior. | | |
| `vaultConnectionRef` _string_ | VaultConnectionRef to the VaultConnection resource, can be prefixed with a namespace,<br />eg: `namespaceA/vaultConnectionRefB`. If no namespace prefix is provided it will default to<br />the namespace of the VaultConnection CR. If no value is specified for VaultConnectionRef the<br />Operator will default to the `default` VaultConnection, configured in the operator's namespace. | | |
| `defaultVaultNamespace` _string_ | DefaultVaultNamespace to auth to in Vault, if not specified the namespace of the auth<br />method will be used. This can be used as a default Vault namespace for all<br />auth methods. | | |
| `defaultAuthMethod` _string_ | DefaultAuthMethod to use when authenticating to Vault. | | Enum: [kubernetes jwt appRole aws gcp] <br /> |
| `defaultMount` _string_ | DefaultMount to use when authenticating to auth method. If not specified the mount of<br />the auth method configured in Vault will be used. | | |
| `params` _object (keys:string, values:string)_ | DefaultParams to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | DefaultHeaders to be included in all Vault requests. | | |
| `kubernetes` _[VaultAuthGlobalConfigKubernetes](#vaultauthglobalconfigkubernetes)_ | Kubernetes specific auth configuration, requires that the Method be set to `kubernetes`. | | |
| `appRole` _[VaultAuthGlobalConfigAppRole](#vaultauthglobalconfigapprole)_ | AppRole specific auth configuration, requires that the Method be set to `appRole`. | | |
| `jwt` _[VaultAuthGlobalConfigJWT](#vaultauthglobalconfigjwt)_ | JWT specific auth configuration, requires that the Method be set to `jwt`. | | |
| `aws` _[VaultAuthGlobalConfigAWS](#vaultauthglobalconfigaws)_ | AWS specific auth configuration, requires that Method be set to `aws`. | | |
| `gcp` _[VaultAuthGlobalConfigGCP](#vaultauthglobalconfiggcp)_ | GCP specific auth configuration, requires that Method be set to `gcp`. | | |
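A minimal VaultAuthGlobal sketch providing shared Kubernetes auth defaults is shown below; the role, mount, and namespace values are placeholders:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
  name: default
  namespace: vso-system        # placeholder: the operator's namespace
spec:
  allowedNamespaces: ["*"]     # allow any namespace to reference this resource
  defaultAuthMethod: kubernetes
  defaultMount: kubernetes     # placeholder auth mount path
  kubernetes:
    role: vso-role             # placeholder Vault role
    serviceAccount: default
    audiences: ["vault"]
```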
#### VaultAuthList
VaultAuthList contains a list of VaultAuth
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultAuthList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultAuth](#vaultauth) array_ | | | |
#### VaultAuthSpec
VaultAuthSpec defines the desired state of VaultAuth
_Appears in:_
- [VaultAuth](#vaultauth)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultConnectionRef` _string_ | VaultConnectionRef to the VaultConnection resource, can be prefixed with a namespace,<br />eg: `namespaceA/vaultConnectionRefB`. If no namespace prefix is provided it will default to<br />the namespace of the VaultConnection CR. If no value is specified for VaultConnectionRef the<br />Operator will default to the `default` VaultConnection, configured in the operator's namespace. | | |
| `vaultAuthGlobalRef` _[VaultAuthGlobalRef](#vaultauthglobalref)_ | VaultAuthGlobalRef. | | |
| `namespace` _string_ | Namespace to auth to in Vault | | |
| `allowedNamespaces` _string array_ | AllowedNamespaces Kubernetes Namespaces which are allow-listed for use with this AuthMethod.<br />This field allows administrators to customize which Kubernetes namespaces are authorized to<br />use this AuthMethod. While Vault will still enforce its own rules, this has the added<br />configurability of restricting which VaultAuthMethods can be used by which namespaces.<br />Accepted values:<br />[]{"*"} - wildcard, all namespaces.<br />[]{"a", "b"} - list of namespaces.<br />unset - disallow all namespaces except the Operator's and the VaultAuthMethod's namespace, this<br />is the default behavior. | | |
| `method` _string_ | Method to use when authenticating to Vault. | | Enum: [kubernetes jwt appRole aws gcp] <br /> |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
| `kubernetes` _[VaultAuthConfigKubernetes](#vaultauthconfigkubernetes)_ | Kubernetes specific auth configuration, requires that the Method be set to `kubernetes`. | | |
| `appRole` _[VaultAuthConfigAppRole](#vaultauthconfigapprole)_ | AppRole specific auth configuration, requires that the Method be set to `appRole`. | | |
| `jwt` _[VaultAuthConfigJWT](#vaultauthconfigjwt)_ | JWT specific auth configuration, requires that the Method be set to `jwt`. | | |
| `aws` _[VaultAuthConfigAWS](#vaultauthconfigaws)_ | AWS specific auth configuration, requires that Method be set to `aws`. | | |
| `gcp` _[VaultAuthConfigGCP](#vaultauthconfiggcp)_ | GCP specific auth configuration, requires that Method be set to `gcp`. | | |
| `storageEncryption` _[StorageEncryption](#storageencryption)_ | StorageEncryption provides the necessary configuration to encrypt the client storage cache.<br />This should only be configured when client cache persistence with encryption is enabled.<br />This is done by setting the manager's commandline argument<br />--client-cache-persistence-model=direct-encrypted. Typically, there should only ever<br />be one VaultAuth configured with StorageEncryption in the Cluster, and it should have<br />the label: cacheStorageEncryption=true | | |
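As a sketch, a VaultAuth using the Kubernetes auth method might be written as follows; the role, mount, and ServiceAccount are placeholders:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: app-auth
  namespace: app-ns            # placeholder
spec:
  vaultConnectionRef: default  # falls back to the default VaultConnection if unset
  method: kubernetes
  mount: kubernetes            # placeholder auth mount path
  kubernetes:
    role: vso-role             # placeholder Vault role
    serviceAccount: default
    audiences: ["vault"]
```

A VaultAuth can instead set `vaultAuthGlobalRef` to inherit most of this configuration from a shared VaultAuthGlobal resource.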
#### VaultClientMeta
VaultClientMeta defines the observed state of the last Vault Client used to
sync the secret. This status is used during resource reconciliation.
_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `cacheKey` _string_ | CacheKey is the unique key used to identify the client cache. | | |
| `id` _string_ | ID is the Vault ID of the authenticated client. The ID should never contain<br />any sensitive information. | | |
#### VaultConnection
VaultConnection is the Schema for the vaultconnections API
_Appears in:_
- [VaultConnectionList](#vaultconnectionlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultConnection` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultConnectionSpec](#vaultconnectionspec)_ | | | |
#### VaultConnectionList
VaultConnectionList contains a list of VaultConnection
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultConnectionList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultConnection](#vaultconnection) array_ | | | |
#### VaultConnectionSpec
VaultConnectionSpec defines the desired state of VaultConnection
_Appears in:_
- [VaultConnection](#vaultconnection)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `address` _string_ | Address of the Vault server | | |
| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. | | |
| `tlsServerName` _string_ | TLSServerName to use as the SNI host for TLS connections. | | |
| `caCertSecretRef` _string_ | CACertSecretRef is the name of a Kubernetes secret containing the trusted PEM encoded CA certificate chain as `ca.crt`. | | |
| `skipTLSVerify` _boolean_ | SkipTLSVerify for TLS connections. | false | |
| `timeout` _string_ | Timeout applied to all Vault requests for this connection. If not set, the<br />default timeout from the Vault API client config is used. | | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
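For example, a TLS-enabled VaultConnection sketch; the address, SNI host, and secret name are placeholders:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
  name: default
  namespace: vso-system                     # placeholder
spec:
  address: https://vault.example.com:8200   # placeholder Vault address
  tlsServerName: vault.example.com
  caCertSecretRef: vault-ca                 # secret holding the CA chain as ca.crt
  skipTLSVerify: false
  timeout: 30s
```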
#### VaultDynamicSecret
VaultDynamicSecret is the Schema for the vaultdynamicsecrets API
_Appears in:_
- [VaultDynamicSecretList](#vaultdynamicsecretlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultDynamicSecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultDynamicSecretSpec](#vaultdynamicsecretspec)_ | | | |
#### VaultDynamicSecretList
VaultDynamicSecretList contains a list of VaultDynamicSecret
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultDynamicSecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultDynamicSecret](#vaultdynamicsecret) array_ | | | |
#### VaultDynamicSecretSpec
VaultDynamicSecretSpec defines the desired state of VaultDynamicSecret
_Appears in:_
- [VaultDynamicSecret](#vaultdynamicsecret)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace,<br />eg: `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to<br />the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef the Operator<br />will default to the `default` VaultAuth, configured in the operator's namespace. | | |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's<br />part of the VaultAuth resource will be inferred. | | |
| `mount` _string_ | Mount path of the secret's engine in Vault. | | |
| `requestHTTPMethod` _string_ | RequestHTTPMethod to use when syncing Secrets from Vault.<br />Setting a value here is not typically required.<br />If left unset the Operator will make requests using the GET method.<br />In the case where Params are specified the Operator will use the PUT method.<br />Please consult [secrets](/vault/docs/secrets) if you are<br />uncertain about what method to use.<br />Of note, the Vault client treats PUT and POST as being equivalent.<br />The underlying Vault client implementation will always use the PUT method. | | Enum: [GET POST PUT] <br /> |
| `path` _string_ | Path in Vault to get the credentials for, and is relative to Mount.<br />Please consult [secrets](/vault/docs/secrets) if you are<br />uncertain about what 'path' should be set to. | | |
| `params` _object (keys:string, values:string)_ | Params that can be passed when requesting credentials/secrets.<br />When Params is set the configured RequestHTTPMethod will be<br />ignored. See RequestHTTPMethod for more details.<br />Please consult [secrets](/vault/docs/secrets) if you are<br />uncertain about what 'params' should/can be set to. | | |
| `renewalPercent` _integer_ | RenewalPercent is the percent out of 100 of the lease duration when the<br />lease is renewed. Defaults to 67 percent plus jitter. | 67 | Maximum: 90 <br />Minimum: 0 <br /> |
| `revoke` _boolean_ | Revoke the existing lease on VDS resource deletion. | | |
| `allowStaticCreds` _boolean_ | AllowStaticCreds should be set when syncing credentials that are periodically<br />rotated by the Vault server, rather than created upon request. These secrets<br />are sometimes referred to as "static roles", or "static credentials", with a<br />request path that contains "static-creds". | | |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret do<br />not support dynamically reloading a rotated secret.<br />In that case, one or more RolloutRestartTarget(s) can be configured here. The Operator will<br />trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events.<br />See RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes. | | |
| `refreshAfter` _string_ | RefreshAfter a period of time for VSO to sync the source secret data, in<br />duration notation e.g. 30s, 1m, 24h. This value only needs to be set when<br />syncing from a secrets engine that does not provide a lease TTL in its<br />response. The value should be within the secrets engine's configured ttl or<br />max_ttl. The source secret's lease duration takes precedence over this<br />configuration when it is greater than 0. | | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
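
A minimal sketch of a `VaultDynamicSecret`, assuming a database secrets engine mounted at `db` with a hypothetical role named `app`; the Deployment name is also a placeholder:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: app-db-creds
spec:
  mount: db               # hypothetical database secrets engine mount
  path: creds/app         # hypothetical role path, relative to the mount
  renewalPercent: 67      # renew at ~67% of the lease duration (the default)
  revoke: true            # revoke the lease when this resource is deleted
  destination:
    name: app-db-creds    # Kubernetes Secret the credentials sync into
    create: true
  rolloutRestartTargets:
    - kind: Deployment
      name: app           # hypothetical workload to rollout-restart on rotation
```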
#### VaultPKISecret
VaultPKISecret is the Schema for the vaultpkisecrets API
_Appears in:_
- [VaultPKISecretList](#vaultpkisecretlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultPKISecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultPKISecretSpec](#vaultpkisecretspec)_ | | | |
#### VaultPKISecretList
VaultPKISecretList contains a list of VaultPKISecret
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultPKISecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultPKISecret](#vaultpkisecret) array_ | | | |
#### VaultPKISecretSpec
VaultPKISecretSpec defines the desired state of VaultPKISecret
_Appears in:_
- [VaultPKISecret](#vaultpkisecret)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace,<br />e.g. `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to<br />the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef the Operator<br />will default to the `default` VaultAuth, configured in the operator's namespace. | | |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's<br />part of the VaultAuth resource will be inferred. | | |
| `mount` _string_ | Mount for the secret in Vault. | | |
| `role` _string_ | Role in Vault to use when issuing TLS certificates. | | |
| `revoke` _boolean_ | Revoke the certificate when the resource is deleted. | | |
| `clear` _boolean_ | Clear the Kubernetes secret when the resource is deleted. | | |
| `expiryOffset` _string_ | ExpiryOffset to use for computing when the certificate should be renewed.<br />The rotation time will be the difference between the expiration and the offset.<br />Should be in duration notation e.g. 30s, 120s, etc. | | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
| `issuerRef` _string_ | IssuerRef reference to an existing PKI issuer, either by Vault-generated<br />identifier, the literal string `default` to refer to the currently<br />configured default issuer, or the name assigned to an issuer.<br />This parameter is part of the request URL. | | |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret do<br />not support dynamically reloading a rotated secret.<br />In that case, one or more RolloutRestartTarget(s) can be configured here. The Operator will<br />trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events.<br />See RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret<br />to Kubernetes. If the type is set to "kubernetes.io/tls", "tls.key" will<br />be set to the "private_key" response from Vault, and "tls.crt" will be<br />set to "certificate" + "ca_chain" from the Vault response ("issuing_ca"<br />is used when "ca_chain" is empty). The "remove_roots_from_chain=true"<br />option is used with Vault to exclude the root CA from the Vault response. | | |
| `commonName` _string_ | CommonName to include in the request. | | |
| `altNames` _string array_ | AltNames to include in the request.<br />May contain both DNS names and email addresses. | | |
| `ipSans` _string array_ | IPSans to include in the request. | | |
| `uriSans` _string array_ | The requested URI SANs. | | |
| `otherSans` _string array_ | Requested other SANs, in an array with the format<br />oid;type:value for each entry. | | |
| `userIDs` _string array_ | User ID (OID 0.9.2342.19200300.100.1.1) Subject values to be placed on the<br />signed certificate. | | |
| `ttl` _string_ | TTL for the certificate; sets the expiration date.<br />If not specified the Vault role's default,<br />backend default, or system default TTL is used, in that order.<br />Cannot be larger than the mount's max TTL.<br />Note: this only has an effect when generating a CA cert or signing a CA cert,<br />not when generating a CSR for an intermediate CA.<br />Should be in duration notation e.g. 120s, 2h, etc. | | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
| `format` _string_ | Format for the certificate. Choices: "pem", "der", "pem_bundle".<br />If "pem_bundle",<br />any private key and issuing cert will be appended to the certificate pem.<br />If "der", the value will be base64 encoded.<br />Default: pem | | |
| `privateKeyFormat` _string_ | PrivateKeyFormat, generally the default will be controlled by the Format<br />parameter as either base64-encoded DER or PEM-encoded DER.<br />However, this can be set to "pkcs8" to have the returned<br />private key contain base64-encoded pkcs8 or PEM-encoded<br />pkcs8 instead.<br />Default: der | | |
| `notAfter` _string_ | NotAfter field of the certificate with the specified date value.<br />The value should be given in UTC format YYYY-MM-ddTHH:MM:SSZ. | | |
| `excludeCNFromSans` _boolean_ | ExcludeCNFromSans from DNS or Email Subject Alternate Names.<br />Default: false | | |
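
A sketch of a `VaultPKISecret` that issues a certificate into a `kubernetes.io/tls` Secret, assuming a PKI secrets engine mounted at `pki` with a hypothetical role `web`; the common name is a placeholder:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultPKISecret
metadata:
  name: web-tls
spec:
  mount: pki                   # hypothetical PKI secrets engine mount
  role: web                    # hypothetical PKI role
  commonName: web.example.com  # placeholder common name
  ttl: 24h
  expiryOffset: 2h             # renew roughly 2h before the certificate expires
  revoke: true                 # revoke the certificate on resource deletion
  destination:
    name: web-tls
    create: true
    type: kubernetes.io/tls    # maps private_key/certificate to tls.key/tls.crt
```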
#### VaultSecretLease
_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `id` _string_ | ID of the Vault secret. | | |
| `duration` _integer_ | LeaseDuration of the Vault secret. | | |
| `renewable` _boolean_ | Renewable Vault secret lease. | | |
| `requestID` _string_ | RequestID of the Vault secret request. | | |
#### VaultStaticCredsMetaData
_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `lastVaultRotation` _integer_ | LastVaultRotation represents the last time Vault rotated the password. | | |
| `rotationPeriod` _integer_ | RotationPeriod is the number of seconds between each rotation, effectively a<br />"time to live". This value is compared to the LastVaultRotation to<br />determine if a password needs to be rotated. | | |
| `rotationSchedule` _string_ | RotationSchedule is a "cron style" string representing the allowed<br />schedule for each rotation.<br />e.g. "1 0 * * *" would rotate at one minute past midnight (00:01) every<br />day. | | |
| `ttl` _integer_ | TTL is the seconds remaining before the next rotation. | | |
#### VaultStaticSecret
VaultStaticSecret is the Schema for the vaultstaticsecrets API
_Appears in:_
- [VaultStaticSecretList](#vaultstaticsecretlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultStaticSecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultStaticSecretSpec](#vaultstaticsecretspec)_ | | | |
#### VaultStaticSecretList
VaultStaticSecretList contains a list of VaultStaticSecret
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultStaticSecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultStaticSecret](#vaultstaticsecret) array_ | | | |
#### VaultStaticSecretSpec
VaultStaticSecretSpec defines the desired state of VaultStaticSecret
_Appears in:_
- [VaultStaticSecret](#vaultstaticsecret)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace,<br />e.g. `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to the<br />namespace of the VaultAuth CR. If no value is specified for VaultAuthRef the Operator will<br />default to the `default` VaultAuth, configured in the operator's namespace. | | |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's<br />part of the VaultAuth resource will be inferred. | | |
| `mount` _string_ | Mount for the secret in Vault. | | |
| `path` _string_ | Path of the secret in Vault; corresponds to the `path` parameter for<br />[kv-v1](/vault/api-docs/secret/kv/kv-v1#read-secret) and [kv-v2](/vault/api-docs/secret/kv/kv-v2#read-secret-version). | | |
| `version` _integer_ | Version of the secret to fetch. Only valid for type kv-v2. Corresponds to version query parameter:<br />[version](/vault/api-docs/secret/kv/kv-v2#version) | | Minimum: 0 <br /> |
| `type` _string_ | Type of the Vault static secret | | Enum: [kv-v1 kv-v2] <br /> |
| `refreshAfter` _string_ | RefreshAfter a period of time, in duration notation e.g. 30s, 1m, 24h | | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
| `hmacSecretData` _boolean_ | HMACSecretData determines whether the Operator computes the<br />HMAC of the Secret's data. The MAC value will be stored in<br />the resource's Status.SecretMac field, and will be used for drift detection<br />and during incoming Vault secret comparison.<br />Enabling this feature is recommended to ensure that the Secret's data stays consistent with Vault. | true | |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret do<br />not support dynamically reloading a rotated secret.<br />In that case, one or more RolloutRestartTarget(s) can be configured here. The Operator will<br />trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events.<br />All configured targets will be ignored if HMACSecretData is set to false.<br />See RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes. | | |
| `syncConfig` _[SyncConfig](#syncconfig)_ | SyncConfig configures sync behavior from Vault to VSO. | | |
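
A sketch of a `VaultStaticSecret` syncing a kv-v2 secret, assuming a kv-v2 engine mounted at `kv`; the secret path and destination name are placeholders:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: app-config
spec:
  mount: kv             # hypothetical kv-v2 secrets engine mount
  path: app/config      # placeholder secret path under the mount
  type: kv-v2
  refreshAfter: 60s     # re-sync the source secret every minute
  hmacSecretData: true  # enable drift detection (the default)
  destination:
    name: app-config    # Kubernetes Secret to sync into
    create: true
```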
---
layout: docs
page_title: Vault Secrets Operator Secret Transformation
description: >-
Learn how to transform Secret data with the Vault Secrets Operator.
---
# Secret data transformation
Utilizing advanced templating and data filters, the Vault Secrets Operator for Kubernetes (VSO) can
transform source secret data, secret metadata, resource labels and annotations into a format that is
compatible with your application. All secret data sources are supported. Secret transformations can
be specified directly within a secret custom resource (CR), or by references to one or more
[SecretTransformation](/vault/docs/platform/k8s/vso/api-reference#secrettransformation) custom resource
instances, or both.
## Templating
VSO utilizes the data-driven [templates for Golang](https://pkg.go.dev/text/template) to generate
secret data output. The template data input holds the secret data, secret metadata, resource labels
and annotations.
Templates are configured in a secret custom resource's
[spec.Destination.Transformation.Templates](/vault/docs/platform/k8s/vso/api-reference#transformation),
or in a SecretTransformation resource's [spec.templates](/vault/docs/platform/k8s/vso/api-reference#secrettransformationspec).
VSO provides access to a large library of template functions, some of which are documented
[below](#template-functions).
### Secret data input
Secret data is accessed through the `.Secrets` input member. It contains a map of secret
key-value pairs, which are assumed to be sensitive information fetched from a
[secret source](/vault/docs/platform/k8s/vso/sources).
For example, to include a password in your application's secret, you might specify
a template like:
```go
{{- .Secrets.password -}}
```
### Secret metadata input
Secret metadata is accessed through the `.Metadata` input member. It contains a map of metadata key to
its value. The data should not contain any confidential information.
For example, to include a secret metadata value in your application's secret, you might specify
a template like:
```go
{{- .Metadata.version -}}
```
### Resource annotations input
Resource annotations are accessed through the `.Annotations` input member. The annotations consist
of all `metadata.annotations` configured on the secret custom resource.
For example, to include a value from the resource's annotations in your application's secret, you
might specify a template like:
```go
{{- index .Annotations "myapp.config/postgres-host" -}}
```
### Resource labels input
Resource labels are accessed through the `.Labels` input member. The labels consist
of all `metadata.labels` configured on the secret custom resource.
For example, to include a value from the resource's labels in your application's secret, you
might specify a template like:
```go
{{- index .Labels "app.kubernetes.io/name" -}}
```
## Filters
Filters are used to control which source secret data fields are included in the destination secret's
data. They are specified as a set of exclude/include [RE2](https://golang.org/s/re2syntax) regular expressions.
Filters are configured in the `excludes` and `includes` fields of a secret custom resource's
[spec.Destination.Transformation](/vault/docs/platform/k8s/vso/api-reference#transformation),
or in a SecretTransformation resource's [spec](/vault/docs/platform/k8s/vso/api-reference#secrettransformationspec).
All exclude patterns take precedence over any include patterns, and are never applied to templated keys.
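For example, a destination transformation like the following (an illustrative sketch using the `excludes`/`includes` fields referenced above) syncs only source keys beginning with `app_`, while always dropping `password`:
```yaml
transformation:
  includes:
    - ^app_.+
  excludes:
    - ^password$
```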
## Examples
### Local transformation
The following VaultDynamicSecret, `example-vds`, is configured to sync Postgres database credentials
from Vault to the Kubernetes secret named `app-secret`.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
namespace: example-ns
name: example-vds
annotations:
myapp.config/postgres-host: postgres-postgresql.postgres.svc.cluster.local:5432
spec:
destination:
create: true
name: app-secret
transformation:
excludes:
- .*
templates:
url:
          text: |
            {{- $host := index .Annotations "myapp.config/postgres-host" -}}
            postgresql://{{ .Secrets.username }}:{{ .Secrets.password }}@{{ $host }}/postgres?sslmode=disable
  mount: db # illustrative database secrets engine mount
  path: creds/dev-postgres
```
The resulting Kubernetes secret includes a single key named `url`, with a valid Postgres connection
URL as its value.
```yaml
url: postgresql://v-postgres-user:[email protected]:5432/postgres?sslmode=disable
```
### Shared transformation
The following manifest contains shared transformation templates and filters. All `templates` it provides
will be included in the destination k8s secret. It also provides `sourceTemplates` that can be included
in any template text configured in a secret CR or within the same resource instance.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: SecretTransformation
metadata:
name: vso-templates
namespace: example-vds
spec:
excludes:
- password|username
templates:
    url:
      text: '{{- template "dbUrl" . -}}'
  sourceTemplates:
    - name: helpers
      text: |
        {{- define "dbUrl" -}}
        {{- $host := index .Annotations "myapp.config/postgres-host" -}}
        postgresql://{{ .Secrets.username }}:{{ .Secrets.password }}@{{ $host }}/postgres?sslmode=disable
        {{- end -}}
```
The following `VaultDynamicSecret` manifest references the `SecretTransformation` above.
All templates and filters from the reference object will be applied to the destination secret data.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
namespace: example-ns
name: example-vds
annotations:
myapp.config/postgres-host: postgres-postgresql.postgres.svc.cluster.local:5432
spec:
destination:
create: true
name: app-secret
transformation:
transformationRefs:
- name: vso-templates
path: creds/dev-postgres
```
The resulting Kubernetes secret includes a single key named `url`, with a valid Postgres connection
URL as its value.
```yaml
url: postgresql://v-postgres-user:[email protected]:5432/postgres?sslmode=disable
```
## Template functions
All template functions are provided by the [sprig](http://masterminds.github.io/sprig) library. Some common functions are mentioned below.
For the complete list of functions, see [allowedSprigFuncs](https://github.com/hashicorp/vault-secrets-operator/blob/main/template/funcs.go#L26).
### String functions
`trim` removes any leading or trailing whitespace from the input:
```
trim " host " -> `host`
```
### Encoding functions
`b64enc` base64 encodes an input value:
```
b64enc "host" -> `aG9zdA==`
```
`b64dec` base64 decodes an input value:
```
b64dec "aG9zdA==" -> `host`
```
### Map functions
`get` retrieves a value from a `map` input:
```
get .Secrets "baz" -> `qux`
```
Given a nested `map` input:
```json
{
"foo": {
"bar": "baz",
"quz": "quux"
}
}
```
`get` can retrieve a specific value:
```
get (get .Secrets "foo") "bar" -> `baz`
```
`dig` can also retrieve a specific value, or return a default if any of the keys
are not found:
```
dig "foo" "quz" "<not found>" .Secrets -> `quux`
dig "foo" "nux" "<not found>" .Secrets -> `<not found>`
```
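Sprig functions can also be chained together in a single template pipeline. For example, assuming an illustrative source key named `api_key`:
```
{{- get .Secrets "api_key" | trim | b64enc -}}
```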
## Related API references
- [Transformation](/vault/docs/platform/k8s/vso/api-reference#transformation)
- [HCPVaultSecretsApp](/vault/docs/platform/k8s/vso/api-reference#hcpvaultsecretsapp)
- [VaultDynamicSecret](/vault/docs/platform/k8s/vso/api-reference#vaultdynamicsecret)
- [VaultPKISecret](/vault/docs/platform/k8s/vso/api-reference#vaultpkisecret)
- [VaultStaticSecret](/vault/docs/platform/k8s/vso/api-reference#vaultstaticsecret)
- [SecretTransformation](/vault/docs/platform/k8s/vso/api-reference#secrettransformation)
---
layout: docs
page_title: Vault Secrets Operator Installation
description: >-
The Vault Secrets Operator can be installed using Helm.
---
@include 'vso/common-links.mdx'
# Installing and upgrading the Vault Secrets Operator
## Prerequisites
- A Kubernetes cluster running 1.23+
- Helm 3.7+
- [Optional] Kustomize 4.5.7+
## Installation using Helm
[Install Helm](https://helm.sh/docs/intro/install) before beginning.
The [Helm chart][helm] is the recommended way of
installing and configuring the Vault Secrets Operator.
To install a new instance of the Vault Secrets Operator, first add the
HashiCorp Helm repository and ensure you have access to the chart:
```shell-session
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
```
```shell-session
$ helm search repo hashicorp/vault-secrets-operator
NAME CHART VERSION APP VERSION DESCRIPTION
hashicorp/vault-secrets-operator 0.9.0 0.9.0 Official HashiCorp Vault Secrets Operator Chart
```
Then install the Operator:
```shell-session
$ helm install --version 0.9.0 --create-namespace --namespace vault-secrets-operator vault-secrets-operator hashicorp/vault-secrets-operator
```
## Upgrading using Helm
You can upgrade an existing installation with the `helm upgrade` command.
Please always run Helm with the `--dry-run` option before any install or upgrade to verify
changes.
Update the `hashicorp` Helm repo:
```shell-session
$ helm repo update hashicorp
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
```
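Then upgrade the release. The example below reuses the release name and namespace from the install example above and pins the target chart version:
```shell-session
$ helm upgrade --version 0.9.0 --namespace vault-secrets-operator vault-secrets-operator hashicorp/vault-secrets-operator
```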
## Updating CRDs when using Helm
<Note title="Important">
As of VSO 0.8.0, VSO will automatically update its CRDs.
The manual upgrade step [Updating CRDs](#updating-crds-when-using-helm-prior-to-vso-0-8-0) below is no longer required when
upgrading to VSO 0.8.0+.
</Note>
The VSO Helm chart will automatically upgrade the CRDs to match the VSO version being deployed.
There should be no need to manually update the CRDs prior to upgrading VSO using Helm.
## Chart values
Refer to the [Helm chart][helm] overview for a full list of supported chart values.
## Installation using Kustomize
You can install and update your installation using `kustomize`, which allows you to extend the `config/` path of the VSO repository using Kustomize primitives.
To install using Kustomize, download and untar/unzip the latest release from the [Releases Page](https://github.com/hashicorp/vault-secrets-operator/releases).
```shell-session
$ wget -q https://github.com/hashicorp/vault-secrets-operator/archive/refs/tags/v0.9.0.tar.gz
$ tar -zxf v0.9.0.tar.gz
$ cd vault-secrets-operator-0.9.0/
```
Next install using `kustomize build`:
```shell-session
$ kustomize build config/default | kubectl apply -f -
namespace/vault-secrets-operator-system created
customresourcedefinition.apiextensions.k8s.io/hcpauths.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/hcpvaultsecretsapps.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/vaultauths.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/vaultconnections.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/vaultdynamicsecrets.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/vaultpkisecrets.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/vaultstaticsecrets.secrets.hashicorp.com created
serviceaccount/vault-secrets-operator-controller-manager created
role.rbac.authorization.k8s.io/vault-secrets-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/vault-secrets-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/vault-secrets-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/vault-secrets-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/vault-secrets-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/vault-secrets-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/vault-secrets-operator-proxy-rolebinding created
configmap/vault-secrets-operator-manager-config created
service/vault-secrets-operator-controller-manager-metrics-service created
deployment.apps/vault-secrets-operator-controller-manager created
```
Confirm the operator has been installed by examining the pods:
```shell-session
$ kubectl get pods -n vault-secrets-operator-system
NAMESPACE NAME READY STATUS RESTARTS AGE
vault-secrets-operator-system vault-secrets-operator-controller-manager-56754d5496-cq69s 2/2 Running 0 1m17s
```
<Note title="Kustomize does not support all features of the Helm chart">
Notably, it will not deploy default VaultAuthMethod, VaultConnection, or Transit-related resources.
Kustomize also does not support the pre-delete hooks that the Helm chart uses to clean up resources
and remove finalizers on the uninstall path. Please see [`config/samples`](https://github.com/hashicorp/vault-secrets-operator/tree/main/config/samples)
or `config/samples` in the downloaded release artifacts for additional resources.
</Note>
## Upgrade using Kustomize
Upgrading using Kustomize is similar to installation: simply download the new release from GitHub and follow
the same steps as outlined in [Installation using Kustomize](#installation-using-kustomize).
No additional steps are required to update the CRDs.
## Legacy notes
The following notes provide guidance for installing/upgrading older versions of VSO.
### Updating CRDs when using Helm prior to VSO 0.8.0
This step can be skipped if you are upgrading to VSO 0.8.0 or later.
<Note title="Helm does not automatically update CRDs">
You must update all CRDs manually before upgrading VSO to a version prior to 0.8.0.
</Note>
You must update the CRDs for VSO manually **before** you upgrade the
operator when the operator is managed by Helm.
**Any `kubectl` warnings related to `last-applied-configuration` should be safe to ignore.**
To update the VSO CRDs, replace `<TARGET_VSO_VERSION>` with the VSO version you are upgrading to:
```shell-session
$ helm show crds --version <TARGET_VSO_VERSION> hashicorp/vault-secrets-operator | kubectl apply -f -
```
For example, if you are upgrading to VSO 0.7.1:
```shell-session
$ helm show crds --version 0.7.1 hashicorp/vault-secrets-operator | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/hcpauths.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/hcpvaultsecretsapps.secrets.hashicorp.com created
Warning: resource customresourcedefinitions/vaultauths.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultauths.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultconnections.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultconnections.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultdynamicsecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultdynamicsecrets.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultpkisecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultpkisecrets.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultstaticsecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultstaticsecrets.secrets.hashicorp.com configured
```
---
layout: docs
page_title: Vault Secrets Operator with HCP Vault Secrets
description: >-
The Vault Secrets Operator allows Pods to consume HCP Vault Secrets natively from Kubernetes Secrets.
---
# HCP Vault Secrets source
## Overview
The Vault Secrets Operator (VSO) syncs your [HCP Vault Secrets app](/hcp/docs/vault-secrets) (HVSA) to
a Kubernetes Secret. VSO syncs each `HCPVaultSecretsApp` custom resource periodically to ensure that
changes to the secret source are properly reflected in the Kubernetes secret.
## Features
- Periodic synchronization of HCP Vault Secrets app to a *destination* Kubernetes Secret.
- Automatic drift detection and remediation when the destination Kubernetes Secret
is modified or deleted.
- Supports all VSO features, including rollout-restarts on secret rotation or
during drift remediation.
- Supports authentication to HCP using [HCP service principals](/hcp/docs/hcp/admin/iam/service-principals).
- Supports [static](#static-secrets), [auto-rotating and dynamic secrets](#auto-rotating-and-dynamic-secrets)
within an HCP Vault Secrets app.
### Supported HCP authentication methods
| Backend | Description |
|----------------------------------------------------------------------|--------------------------------------------------------|
| [HCP Service Principals](/hcp/docs/hcp/admin/iam/service-principals) | Relies on static credentials for authenticating to HCP |
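The service principal credentials are provided to VSO through a Kubernetes secret. A minimal sketch, using placeholder credential values and the `clientID`/`clientSecret` keys expected by the [service principal spec](/vault/docs/platform/k8s/vso/api-reference#hcpauthserviceprincipal):
```shell-session
$ kubectl create secret generic vso-app-sp \
    --namespace vso-example-ns \
    --from-literal=clientID=<HCP_CLIENT_ID> \
    --from-literal=clientSecret=<HCP_CLIENT_SECRET>
```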
### HCP Vault Secrets sync example
Use the following Kubernetes configuration to sync the HCP Vault Secrets app, `vso-app`,
to the Kubernetes secret, `vso-app-secret`, in the `vso-example-ns` Kubernetes namespace.
The example assumes you have already created the HCP Vault Secrets app and set up the
[service principal Kubernetes secret](/vault/docs/platform/k8s/vso/api-reference#hcpauthserviceprincipal).
Refer to the [Kubernetes VSO installation guide](/vault/docs/platform/k8s/vso/installation)
before applying any of the example configurations below.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: HCPAuth
metadata:
name: hcp-auth
namespace: vso-example-ns
spec:
organizationID: xxxxxxxx-76e9-4e17-b5e9-xxxxxxxx4c33
projectID: xxxxxxxx-bd16-443f-a266-xxxxxxxxcb52
servicePrincipal:
secretRef: vso-app-sp
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: HCPVaultSecretsApp
metadata:
name: vso-app
namespace: vso-example-ns
spec:
appName: vso-app
hcpAuthRef: hcp-auth
destination:
create: true
name: vso-app-secret
```
### Static Secrets
VSO supports syncing [static secrets](/hcp/docs/vault-secrets/static-secrets/create-static-secret)
from an HCP Vault Secrets app to a Kubernetes Secret. VSO syncs the secrets to
Kubernetes on the [refreshAfter](/vault/docs/platform/k8s/vso/api-reference#hcpvaultsecretsappspec)
interval set in the HCPVaultSecretsApp spec.
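For example, to have VSO re-sync the app's secrets every 30 minutes (an illustrative value), set `refreshAfter` on the spec from the sync example above:
```yaml
spec:
  appName: vso-app
  hcpAuthRef: hcp-auth
  refreshAfter: 30m
  destination:
    create: true
    name: vso-app-secret
```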
### Auto-rotating and Dynamic Secrets
<Tip title="Feature availability">
VSO v0.9.0
</Tip>
VSO also supports syncing [auto-rotating](/hcp/docs/vault-secrets/auto-rotation)
and [dynamic](/hcp/docs/vault-secrets/dynamic-secrets) secrets from an HCP Vault
Secrets app to a Kubernetes Secret.
VSO syncs auto-rotating secrets along with static secrets on the
[refreshAfter](/vault/docs/platform/k8s/vso/api-reference#hcpvaultsecretsappspec)
interval, and rotation is handled by HCP. VSO syncs dynamic secrets when the
[specified percentage](/vault/docs/platform/k8s/vso/api-reference#hvsdynamicsyncconfig)
of their TTL has elapsed. Each sync of a dynamic secret generates a new set of
credentials.
An auto-rotating or dynamic secret can have multiple key-value pairs, which
are rendered in the destination Kubernetes Secret as both a nested map and
flattened key-value pairs. For example:
```yaml
apiVersion: v1
kind: Secret
data:
secret_name: {"key_one": "value_one", "key_two": "value_two"}
secret_name_key_one: "value_one"
secret_name_key_two: "value_two"
...
```
Transformation [template commands like `get` and `dig`](/vault/docs/platform/k8s/vso/secret-transformation#map-functions)
in the HCPVaultSecretsApp Destination can be used to extract values from the
nested map format:
```yaml
transformation:
  templates:
    secret_one:
      text: '{{- get (get .Secrets "secret_name") "key_one" -}}'
    secret_two:
      text: '{{- dig "secret_name" "key_two" "" .Secrets -}}'
```
@include 'vso/blurb-api-reference.mdx'
## Tutorial
Refer to the [HCP Vault Secrets with Vault Secrets Operator for
Kubernetes](/vault/tutorials/hcp-vault-secrets-get-started/kubernetes-vso) tutorial to
learn the end-to-end workflow using the Vault Secrets Operator.
---
layout: docs
page_title: Vault Secrets Operator
description: >-
The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets.
---
@include 'vso/common-links.mdx'
# Vault Secrets Operator
The Vault Secrets Operator (VSO) supports Vault as a secret source, which
lets you seamlessly integrate VSO with a Vault instance running on any
platform.
## Supported Vault platform and version
| Platform | Version |
|-------------------------------------------|---------|
| [Vault Enterprise/Community](/vault/docs) | 1.11+ |
| [HCP Vault Dedicated](/hcp/docs/vault) | 1.11+ |
## Features
Vault Secrets Operator supports the following Vault features:
- Sync from multiple instances of Vault.
- All Vault [secrets engines](/vault/docs/secrets) supported.
- TLS/mTLS communications with Vault.
- Support for all VSO features, including performing a rollout-restart upon secret rotation or
during drift remediation.
- Cross Vault namespace authentication for Vault Enterprise 1.13+.
- [Encrypted Vault client cache storage](/vault/docs/platform/k8s/vso/sources/vault#vault-client-cache) for improved performance and security.
- [Instant updates](/vault/docs/platform/k8s/vso/sources/vault#instant-updates)
  for VaultStaticSecrets with Vault Enterprise 1.16.3+.
### Supported Vault authentication methods
| Backend | Description |
|------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| [Kubernetes](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigkubernetes) | Relies on short-lived Kubernetes ServiceAccount tokens for Vault authentication |
| [JWT](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigjwt) | Relies on either static JWT tokens or short-lived Kubernetes ServiceAccount tokens for Vault authentication |
| [AppRole](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigapprole) | Relies on static AppRole credentials for Vault authentication |
| [AWS](/vault/docs/platform/k8s/vso/sources/vault/auth/aws) | Relies on AWS credentials for Vault authentication |
| [GCP](/vault/docs/platform/k8s/vso/sources/vault/auth/gcp) | Relies on GCP credentials for Vault authentication |
## Vault access and custom resource definitions
`VaultConnection` and `VaultAuth` CRDs provide Vault connection and authentication configuration
information for the operator. Consider `VaultConnection` and `VaultAuth` as foundational resources
used by all secret replication resource types.
### VaultConnection custom resource
Provides the required configuration details for connecting to a single Vault server instance.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
namespace: vso-example
name: vault-connection
spec:
# required configuration
# address to the Vault server.
address: http://vault.vault.svc.cluster.local:8200
# optional configuration
# HTTP headers to be included in all Vault requests.
# headers: []
# TLS server name to use as the SNI host for TLS connections.
# tlsServerName: ""
# skip TLS verification for TLS connections to Vault.
# skipTLSVerify: false
# the trusted PEM encoded CA certificate chain stored in a Kubernetes Secret
# caCertSecretRef: ""
```
### VaultAuth custom resource
Provide the configuration necessary for the Operator to authenticate to a single Vault server instance as
specified in a `VaultConnection` custom resource.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
namespace: vso-example
name: vault-auth
spec:
# required configuration
# VaultConnectionRef of the corresponding VaultConnection CustomResource.
# If no value is specified the Operator will default to the `default` VaultConnection,
# configured in its own Kubernetes namespace.
vaultConnectionRef: vault-connection
# Method to use when authenticating to Vault.
method: kubernetes
# Mount to use when authenticating to auth method.
mount: kubernetes
# Kubernetes specific auth configuration, requires that the Method be set to kubernetes.
kubernetes:
# role to use when authenticating to Vault
role: example
# ServiceAccount to use when authenticating to Vault
# it is recommended to always provide a unique serviceAccount per Pod/application
serviceAccount: default
# optional configuration
# Vault namespace where the auth backend is mounted (requires Vault Enterprise)
# namespace: ""
# Params to use when authenticating to Vault
# params: []
# HTTP headers to be included in all Vault authentication requests.
# headers: []
```
### VaultAuthGlobal custom resource
<Tip title="Feature availability">
VSO v0.8.0
</Tip>
Namespaced resource that provides shared Vault authentication configuration that can be inherited by multiple
`VaultAuth` custom resources. It supports multiple authentication methods and allows you to define a default
authentication method that can be overridden by individual VaultAuth custom resources. See `vaultAuthGlobalRef` in
the [VaultAuth spec][va-spec] for more details. The `VaultAuthGlobal` custom resource is optional and can be used to
simplify the configuration of multiple VaultAuth custom resources by reducing config duplication. Like other
namespaced VSO custom resources, there can be many VaultAuthGlobal resources configured in a single Kubernetes cluster.
For more details on how to integrate VaultAuthGlobals into your workflow, see the detailed [Authentication][auth]
docs.
<Tip>
The VaultAuthGlobal resources shares many of the same fields as the VaultAuth custom resource, but cannot be used
for authentication directly. It is only used to define shared Vault authentication configuration within a Kubernetes
cluster.
</Tip>
The example below demonstrates how to define a VaultAuthGlobal custom resource with a default authentication method of
`kubernetes`, along with a VaultAuth custom resource that inherits its global configuration.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
namespace: vso-example
name: vault-auth-global
spec:
defaultAuthMethod: kubernetes
kubernetes:
audiences:
- vault
mount: kubernetes
namespace: example-ns
role: auth-role
serviceAccount: default
tokenExpirationSeconds: 600
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
namespace: vso-example
name: vault-auth
spec:
vaultAuthGlobalRef:
name: vault-auth-global
kubernetes:
role: local-role
```
#### Explanation
- The VaultAuthGlobal custom resource defines a default authentication method of kubernetes with the `defaultAuthMethod`
field.
- The VaultAuth custom resource inherits the global configuration by referencing the VaultAuthGlobal custom
resource with the `vaultAuthGlobalRef` field.
- The `kubernetes.role` field in the VaultAuth custom resource spec overrides the value of the corresponding field in
the VaultAuthGlobal custom resource. All other fields are inherited from the VaultAuthGlobal custom resource
`spec.kubernetes` field, e.g., `audiences`, `mount`, `serviceAccount`, `namespace`, etc.
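Put together, the effective configuration of `vault-auth` above is equivalent to the following standalone VaultAuth, shown here only for illustration:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  namespace: vso-example
  name: vault-auth
spec:
  method: kubernetes
  mount: kubernetes
  namespace: example-ns
  kubernetes:
    role: local-role            # overridden by the VaultAuth
    serviceAccount: default     # inherited from the VaultAuthGlobal
    audiences:                  # inherited
      - vault
    tokenExpirationSeconds: 600 # inherited
```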
## Vault secret custom resource definitions
Provide the configuration necessary for the Operator to replicate a single Vault Secret to a single Kubernetes Secret.
Each supported CRD is specialized to a *class* of Vault secret, documented below.
### VaultStaticSecret custom resource
Provides the configuration necessary for the Operator to synchronize a single Vault *static* Secret to a single Kubernetes Secret.<br />
Supported secrets engines: [kv-v2](/vault/docs/secrets/kv/kv-v2), [kv-v1](/vault/docs/secrets/kv/kv-v1)
##### KV version 1 secret example
The KV secrets engine's `kvv1` mount path is specified under `spec.mount` of `VaultStaticSecret` custom resource. Please consult [KV Secrets Engine - Version 1 - Setup](/vault/docs/secrets/kv/kv-v1#setup) for configuring KV secrets engine version 1. The following results in a request to `http://127.0.0.1:8200/v1/kvv1/eng/apikey/google` to retrieve the secret.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
namespace: vso-example
name: vault-static-secret-v1
spec:
vaultAuthRef: vault-auth
mount: kvv1
type: kv-v1
path: eng/apikey/google
refreshAfter: 60s
destination:
create: true
name: static-secret1
```
##### KV version 2 secret example
Set the KV secrets engine (`kvv2`) mount path with the `spec.mount` parameter of
your `VaultStaticSecret` custom resource. For more advanced KV secrets engine
version 2 configuration options, consult the
[KV Secrets Engine - Version 2 - Setup](/vault/docs/secrets/kv/kv-v2#setup)
guide.
For example, to send requests to `http://127.0.0.1:8200/v1/kvv2/eng/apikey/google`
to retrieve secrets:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
namespace: vso-example
name: vault-static-secret-v2
spec:
vaultAuthRef: vault-auth
mount: kvv2
type: kv-v2
path: eng/apikey/google
version: 2
refreshAfter: 60s
destination:
create: true
name: static-secret2
```
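After the resource is applied and reconciled, you can confirm the sync by reading back the destination secret (an illustrative check; the data keys depend on your KV contents):
```shell-session
$ kubectl get secret static-secret2 --namespace vso-example -o yaml
```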
### VaultPKISecret custom resource
Provides the configuration necessary for the Operator to synchronize a single Vault *PKI* Secret to a single Kubernetes Secret.<br />
Supported secrets engines: [pki](/vault/docs/secrets/pki)
The PKI secrets engine's mount path is specified under `spec.mount` of `VaultPKISecret` custom resource. Please consult [PKI Secrets Engine - Setup and Usage](/vault/docs/secrets/pki/setup) for configuring PKI secrets engine. The following results in a request to `http://127.0.0.1:8200/v1/pki/issue/default` to generate TLS certificates.
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultPKISecret
metadata:
namespace: vso-example
name: vault-pki
spec:
vaultAuthRef: vault-auth
mount: pki
role: default
commonName: example.com
format: pem
expiryOffset: 1s
ttl: 60s
namespace: tenant-1
destination:
create: true
name: pki1
```
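If the certificate is consumed by a workload that expects a standard TLS secret, such as an Ingress, the destination can instead be created as a `kubernetes.io/tls` typed secret; VSO then maps the private key from the Vault response to `tls.key` and the certificate chain to `tls.crt`. A minimal sketch of the changed fields:
```yaml
  destination:
    create: true
    name: pki1-tls
    type: kubernetes.io/tls
```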
### VaultDynamicSecret custom resource
Provides the configuration necessary for the Operator to synchronize a single Vault *dynamic* Secret to a single Kubernetes Secret.<br />
Supported secrets engines (non-exhaustive): [databases](/vault/docs/secrets/databases), [aws](/vault/docs/secrets/aws),
[azure](/vault/docs/secrets/azure), [gcp](/vault/docs/secrets/gcp), ...
##### Database secret example
Set the database secret engine mount path (`db`) with the `spec.mount` of your
`VaultDynamicSecret` custom resource. For more advanced database secrets engine
configuration options, consult the
[Database Secrets Engine - Setup](/vault/docs/secrets/databases#setup) guide.
For example, to send requests to
`http://127.0.0.1:8200/v1/db/creds/my-postgresql-role` to generate a new
credential:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
namespace: vso-example
name: vault-dynamic-secret-db
spec:
vaultAuthRef: vault-auth
mount: db
path: creds/my-postgresql-role
destination:
create: true
name: dynamic-db
```
##### AWS secret example
Set the AWS secrets engine mount path (`aws`) with the `spec.mount` parameter of
your `VaultDynamicSecret` custom resource. For more advanced AWS secrets engine
configuration options, consult the
[AWS Secrets Engine - Setup](/vault/docs/secrets/aws#setup) guide.
For example, to send requests to `http://127.0.0.1:8200/v1/aws/creds/my-iam-role`
to generate a new IAM credential:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
namespace: vso-example
name: vault-dynamic-secret-aws-iam
spec:
vaultAuthRef: vault-auth
mount: aws
path: creds/my-iam-role
destination:
create: true
name: dynamic-aws-iam
```
To send requests to `http://127.0.0.1:8200/v1/aws/sts/my-sts-role` to generate a new STS credential:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
namespace: vso-example
name: vault-dynamic-secret-aws-sts
spec:
vaultAuthRef: vault-auth
mount: aws
path: sts/my-sts-role
destination:
create: true
name: dynamic-aws-sts
```
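Both AWS examples assume an AWS secrets engine mounted at `aws` with roles named
`my-iam-role` and `my-sts-role`. A minimal Vault-side sketch follows; the root
credentials, policy ARN, and role ARN are placeholders:

```shell-session
$ vault secrets enable aws

$ # Root credentials Vault uses to manage IAM users and assume roles
$ vault write aws/config/root \
    access_key=<AWS_ACCESS_KEY_ID> \
    secret_key=<AWS_SECRET_ACCESS_KEY> \
    region=us-east-1

$ # Backs the creds/my-iam-role example above
$ vault write aws/roles/my-iam-role \
    credential_type=iam_user \
    policy_arns=arn:aws:iam::aws:policy/ReadOnlyAccess

$ # Backs the sts/my-sts-role example above
$ vault write aws/roles/my-sts-role \
    credential_type=assumed_role \
    role_arns=arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_TO_ASSUME>
```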
@include 'vso/blurb-api-reference.mdx'
## Vault client cache
The Vault Secrets Operator can optionally cache Vault client information, such as Vault tokens and leases, in Kubernetes Secrets within its own namespace. The client cache enables seamless upgrades because Vault tokens and dynamic secret leases can continue to be tracked and renewed through leadership changes. Client cache persistence and encryption are not enabled by default because they require extra configuration and Vault server setup. VSO supports encrypting the client cache using the Vault server's [transit secrets engine](/vault/docs/secrets/transit).
The [Encrypted client cache](/vault/docs/platform/k8s/vso/sources/vault/client-cache) guide will walk you through the steps to enable and configure client cache encryption.
## Instant updates <EnterpriseAlert inline="true" />
<Tip title="Feature availability">
VSO v0.8.0
</Tip>
The Vault Secrets Operator can instantly update Kubernetes Secrets when changes
are made in Vault, by subscribing to [Vault Events][vault-events] for change
notification. Setting a refresh interval (e.g. [refreshAfter][vss-spec]) is
still recommended since event message delivery is not guaranteed.
**Supported secret types:**
- [VaultStaticSecret](#vaultstaticsecret-custom-resource) ([kv-v1](/vault/docs/secrets/kv/kv-v1),
[kv-v2](/vault/docs/secrets/kv/kv-v2))
<Note title="Requires Vault Enterprise 1.16.3+">
The instant updates option requires [Vault Enterprise](/vault/docs/enterprise)
1.16.3+ due to the use of [Vault Event Notifications][vault-events].
</Note>
The [Instant updates](/vault/docs/platform/k8s/vso/sources/vault/instant-updates) guide
will walk you through the steps to enable instant updates for a VaultStaticSecret.
[vss-spec]: /vault/docs/platform/k8s/vso/api-reference#vaultstaticsecretspec
[vault-events]: /vault/docs/concepts/events
## Tutorial
Refer to the [Vault Secrets Operator on
Kubernetes](/vault/tutorials/kubernetes/vault-secrets-operator) tutorial to
learn the end-to-end workflow using the Vault Secrets Operator.

---
layout: docs
page_title: Instant updates with Vault Secrets Operator
description: >-
Enable instant updates with Vault Secrets Operator.
---
# Instant updates for a VaultStaticSecret
Vault Secrets Operator (VSO) supports instant updates for
[VaultStaticSecrets][vss-spec] by subscribing to event notifications from Vault.
## Before you start
- **You must have [Vault Secrets Operator](/vault/docs/platform/k8s/vso/sources/vault) installed**.
- **You must use [Vault Enterprise](/vault/docs/enterprise) version 1.16.3 or later**.
## Step 1: Set event permissions
Grant these permissions in the policy associated with the VaultAuth role:
```hcl
path "<kv mount>/<kv secret path>" {
capabilities = ["read", "list", "subscribe"]
subscribe_event_types = ["*"]
}
path "sys/events/subscribe/kv*" {
capabilities = ["read"]
}
```
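For example, you can create the policy with the Vault CLI and attach it to the
role used by your VaultAuth resource. The policy name `vso-events` is an
assumption for illustration:

```shell-session
$ vault policy write vso-events - <<EOF
path "<kv mount>/<kv secret path>" {
  capabilities = ["read", "list", "subscribe"]
  subscribe_event_types = ["*"]
}

path "sys/events/subscribe/kv*" {
  capabilities = ["read"]
}
EOF
```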
<Tip>
See [Event Notifications Policies][events-policies] for more information on
Vault event notification permissions.
</Tip>
## Step 2: Enable instant updates on the VaultStaticSecret
Set `syncConfig.instantUpdates=true` in the [VaultStaticSecret spec][vss-spec]:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
namespace: vso-example
name: vault-static-secret-v2
spec:
vaultAuthRef: vault-auth
mount: <kv mount>
type: kv-v2
path: <kv secret path>
version: 2
refreshAfter: 1h
destination:
create: true
name: static-secret2
syncConfig:
instantUpdates: true
```
## Debugging
Check Kubernetes events on the VaultStaticSecret resource to see if VSO
subscribed to Vault event notifications.
### Example: VSO is subscribed to Vault event notifications for the secret
```shell-session
$ kubectl describe vaultstaticsecret vault-static-secret-v2 -n vso-example
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SecretSynced 2s VaultStaticSecret Secret synced
Normal EventWatcherStarted 2s (x2 over 2s) VaultStaticSecret Started watching events
Normal SecretRotated 2s VaultStaticSecret Secret synced
```
### Example: The VaultAuth role policy lacks the required event permissions
```shell-session
$ kubectl describe vaultstaticsecret vault-static-secret-v2 -n vso-example
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SecretSynced 2s VaultStaticSecret Secret synced
Warning EventWatcherError 2s VaultStaticSecret Error while watching events:
failed to connect to vault websocket: error returned when opening event stream
web socket to wss://vault.vault.svc.cluster.local:8200/v1/sys/events/subscribe/kv%2A?json=true,
ensure VaultAuth role has correct permissions and Vault is Enterprise version
1.16 or above: {"errors":["1 error occurred:\n\t* permission denied\n\n"]}
Normal SecretRotated 2s VaultStaticSecret Secret synced
```
[vss-spec]: /vault/docs/platform/k8s/vso/api-reference#vaultstaticsecretspec
[vault-events]: /vault/docs/concepts/events
[events-policies]: /vault/docs/concepts/events#policies

---
layout: docs
page_title: Persist and encrypt the Vault client cache
description: >-
Enable and encrypt the Vault client cache for dynamic secrets with Vault
Secrets Operator.
---
# Persist and encrypt the Vault client cache
By default, the [Vault client cache](/vault/docs/platform/k8s/vso/sources/vault#vault-client-cache) does not persist. You can use the
[transit secrets engine](/vault/docs/secrets/transit) with Vault Secrets Operator (VSO)
to store and encrypt the client cache in your Vault server.
<Highlight title="Dynamic secrets best practice">
We strongly recommend persisting and encrypting the client cache if you use
[Vault dynamic secrets](/vault/docs/platform/k8s/vso/api-reference#vaultdynamicsecret),
so that dynamic secret leases are maintained through restarts and upgrades.
</Highlight>
## Before you start
- **You must have [Vault Secrets Operator](/vault/docs/platform/k8s/vso/sources/vault) installed**.
- **You must have the [`transit` secrets engine](/vault/docs/secrets/transit) enabled**.
- **You must have the [`kubernetes` authentication engine](/vault/docs/auth/kubernetes) enabled**.
## Step 1: Configure a key and policy for VSO
Use the Vault CLI or Terraform to add a key to `transit` and define policies
for encrypting and decrypting cache information:
<CodeTabs>
<CodeBlockConfig>
```shell-session
export VAULT_NAMESPACE=<VAULT_NAMESPACE>
export VAULT_TRANSIT_PATH=<VAULT_TRANSIT_PATH>
vault write -f ${VAULT_TRANSIT_PATH}/keys/vso-client-cache
vault policy write operator - <<EOH
path "${VAULT_TRANSIT_PATH}/encrypt/vso-client-cache" {
capabilities = ["create", "update"]
}
path "${VAULT_TRANSIT_PATH}/decrypt/vso-client-cache" {
capabilities = ["create", "update"]
}
EOH
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
locals {
transit_path = "<VAULT_TRANSIT_PATH>"
transit_namespace = "<VAULT_NAMESPACE>"
}
resource "vault_transit_secret_cache_config" "cache" {
namespace = local.transit_namespace
backend = local.transit_path
size = 500
}
resource "vault_transit_secret_backend_key" "cache" {
namespace = local.transit_namespace
backend = local.transit_path
name = "vso-client-cache"
deletion_allowed = true
}
data "vault_policy_document" "operator_transit" {
rule {
path = "${local.transit_path}/encrypt/${vault_transit_secret_backend_key.cache.name}"
capabilities = ["create", "update"]
description = "encrypt"
}
rule {
path = "${local.transit_path}/decrypt/${vault_transit_secret_backend_key.cache.name}"
capabilities = ["create", "update"]
description = "decrypt"
}
}
resource "vault_policy" "operator" {
namespace = vault_transit_secret_backend_key.cache.namespace
name = "operator"
policy = data.vault_policy_document.operator_transit.hcl
}
```
</CodeBlockConfig>
</CodeTabs>
## Step 2: Create a kubernetes authentication role
Use the Vault CLI or Terraform to create a Kubernetes authentication role for
Vault Secrets Operator.
<CodeTabs>
<CodeBlockConfig>
```shell-session
export VAULT_NAMESPACE=<VAULT_NAMESPACE>
vault write auth/<VAULT_KUBERNETES_PATH>/role/operator \
bound_service_account_names=vault-secrets-operator-controller-manager \
bound_service_account_namespaces=vault-secrets-operator \
token_period="24h" \
token_policies=operator \
audience="vault"
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
data "vault_auth_backend" "kubernetes" {
namespace = "<VAULT_NAMESPACE>"
path = "<VAULT_KUBERNETES_PATH>"
}
resource "vault_kubernetes_auth_backend_config" "local" {
namespace = data.vault_auth_backend.kubernetes.namespace
backend = data.vault_auth_backend.kubernetes.path
kubernetes_host = "https://kubernetes.default.svc"
}
resource "vault_kubernetes_auth_backend_role" "operator" {
namespace = data.vault_auth_backend.kubernetes.namespace
backend = vault_kubernetes_auth_backend_config.local.backend
role_name = "operator"
bound_service_account_names = ["vault-secrets-operator-controller-manager"]
bound_service_account_namespaces = ["vault-secrets-operator"]
token_period = 120
token_policies = [
vault_policy.operator.name,
]
audience = "vault"
}
```
</CodeBlockConfig>
</CodeTabs>
## Step 3: Configure a Vault connection for VSO
Use the Vault Secrets Operator API to add a
[VaultConnection](/vault/docs/platform/k8s/vso/api-reference#vaultconnection)
between VSO and your Vault server.
<Note>If you already have a connection for VSO, continue to the next step</Note>
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
name: local-vault-server
namespace: vault-secrets-operator
spec:
address: 'https://vault.vault.svc.cluster.local:8200'
```
## Step 4: Enable encrypted client cache storage
<Tabs>
<Tab heading="Helm">
For [Helm installs](/vault/docs/platform/k8s/vso/installation#installation-using-helm),
set (or upgrade) the [`controller.manager.clientCache`](/vault/docs/platform/k8s/vso/helm#v-controller-manager-clientcache)
values in your Helm configuration:
```yaml
controller:
manager:
clientCache:
persistenceModel: direct-encrypted
storageEncryption:
enabled: true
vaultConnectionRef: local-vault-server
keyName: vso-client-cache
transitMount: <VAULT_TRANSIT_PATH>
namespace: <VAULT_NAMESPACE>
method: kubernetes
mount: <VAULT_KUBERNETES_PATH>
kubernetes:
role: operator
serviceAccount: vault-secrets-operator-controller-manager
tokenAudiences: ["vault"]
```
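For example, applying the values above to an existing installation might look
like the following; the values file name is an assumption:

```shell-session
$ helm upgrade --install vault-secrets-operator hashicorp/vault-secrets-operator \
    --namespace vault-secrets-operator \
    --values vso-values.yaml
```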
</Tab>
<Tab heading="OLM/OperatorHub">
For [OpenShift OperatorHub](/vault/docs/platform/k8s/vso/openshift#operatorhub)
installs:
1. Add a [VaultAuth](/vault/docs/platform/k8s/vso/api-reference#vaultauth) entry
   to your cluster for storage:

   - set the `cacheStorageEncryption` label to `true`
   - add a [spec.storageEncryption](/vault/docs/platform/k8s/vso/api-reference#vaultauthspec)
     configuration.
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: operator
namespace: vault-secrets-operator
labels:
cacheStorageEncryption: 'true'
spec:
kubernetes:
role: operator
serviceAccount: vault-secrets-operator-controller-manager
tokenExpirationSeconds: 600
audiences: ["vault"]
method: kubernetes
mount: <VAULT_KUBERNETES_PATH>
namespace: <VAULT_NAMESPACE>
storageEncryption:
keyName: vso-client-cache
mount: <VAULT_TRANSIT_PATH>
vaultConnectionRef: local-vault-server
```
1. Set the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable in VSO's
subscription:
<CodeBlockConfig highlight="6-7">
```yaml
spec:
name: vault-secrets-operator
channel: stable
config:
env:
- name: VSO_CLIENT_CACHE_PERSISTENCE_MODEL
value: direct-encrypted
```
</CodeBlockConfig>
With the operator installed through OperatorHub, edit your subscription and
set the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable.
<Tabs>
<Tab heading="Web Console">
<ol>
<li>Navigate to the <b>Operators</b> menu</li>
<li>Select <b>Installed Operators</b></li>
<li>Select "Vault Secrets Operator"</li>
<li>Click "Edit Subscription" in the top right action menu</li>
</ol>
</Tab>
<Tab heading="CLI">
```shell-session
oc edit subscription \
vault-secrets-operator \
-n vault-secrets-operator
```
</Tab>
</Tabs>
The pod in the operator deployment restarts once the
`VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable change is applied to
the subscription.
</Tab>
</Tabs>
## Optional: Verify client cache storage and encryption
1. Confirm the Vault Secrets Operator logs the following information on startup:
<CodeBlockConfig hideClipboard>
```json
Starting manager {"clientCachePersistenceModel": "direct-encrypted",
"clientCacheSize": 10000}
```
</CodeBlockConfig>
1. Confirm the Vault Secrets Operator logs a "Setting up Vault Client for
storage encryption" message when authenticating to Vault on behalf of a user:
<CodeBlockConfig hideClipboard>
```json
{"level":"info","ts":"2024-02-22T00:41:46Z","logger":"clientCacheFactory",
"msg":"Setting up Vault Client for storage encryption","persist":true,
"enforceEncryption":true,"cacheKey":"kubernetes-59ebf88ccb963a22226bad"}
```
</CodeBlockConfig>
1. Verify the encrypted cache is stored as Kubernetes secrets under the correct
namespace with the prefix `vso-cc-<auth method>`. For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ kubectl get secrets -n vault-secrets-operator
...
NAME TYPE DATA AGE
vso-cc-kubernetes-0147431c618992b6adfed1 Opaque 2 73s
...
```
</CodeBlockConfig>

---
layout: docs
page_title: AWS auth support for Vault Secrets Operator
description: >-
Learn how AWS authentication works for Vault Secrets Operator
---
# AWS auth support for Vault Secrets Operator
The Vault Secrets Operator (VSO) supports
[AWS authentication](/vault/docs/auth/aws) when accessing Vault. VSO
can retrieve AWS credentials:
- from an [IRSA-enabled Kubernetes service account][aws-irsa].
- by inferring credentials from the underlying EKS node role.
- by inferring credentials from the EC2 instance profile of the instance
where the operator pod is running.
- from an explicitly provided static access key ID and secret access key.
The following examples illustrate how to configure a Vault role and the corresponding VaultAuth profile in VSO for different AWS authentication scenarios.
## IRSA
1. Follow the Amazon documentation for [IAM roles for service accounts][aws-irsa]
to add an OIDC provider so your Kubernetes service account can assume an IAM
role.
1. Create an appropriate authentication role in your Vault instance:
<CodeTabs>
<CodeBlockConfig>
```shell-session
$ vault write auth/aws/role/<VAULT_AWS_IRSA_ROLE> \
auth_type="iam" \
policies="default" \
bound_iam_principal_arn="arn:aws:iam::<ACCOUNT_ID>:role/<IAM_IRSA_ROLE>"
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
resource "vault_aws_auth_backend_role" "aws_irsa_role" {
  backend = "aws"
  role = "<VAULT_AWS_IRSA_ROLE>"
auth_type = "iam"
token_policies = ["default"]
bound_iam_principal_arns = [
"arn:aws:iam::<ACCOUNT_ID>:role/<IAM_IRSA_ROLE>",
]
}
```
</CodeBlockConfig>
</CodeTabs>
1. Create the corresponding authentication entry in VSO:
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: vaultauth-aws-irsa-example
namespace: <K8S_NAMESPACE>
spec:
vaultConnectionRef: <VAULT_CONNECTION_NAME>
mount: aws
method: aws
aws:
role: <VAULT_AWS_IRSA_ROLE>
region: <AWS_REGION>
irsaServiceAccount: <SERVICE_ACCOUNT>
```
<Tip title="Terraform has IRSA support">
If you use Terraform to manage your Elastic Kubernetes (EKS) cluster, the
[AWS EKS module](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest)
includes IRSA support through the
[IRSA submodule](https://registry.terraform.io/modules/terraform-aws-modules/iam/aws/latest/submodules/iam-role-for-service-accounts-eks).
</Tip>
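IRSA also requires the Kubernetes service account to carry the
`eks.amazonaws.com/role-arn` annotation so the EKS webhook can inject AWS
credentials. A minimal sketch, reusing the placeholder names from above:

```shell-session
$ kubectl annotate serviceaccount <SERVICE_ACCOUNT> \
    --namespace <K8S_NAMESPACE> \
    eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/<IAM_IRSA_ROLE>
```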
## Node role
1. Create an appropriate authentication role in your Vault instance:
<CodeTabs>
<CodeBlockConfig>
```shell-session
$ vault write auth/aws/role/<VAULT_AWS_NODE_ROLE> \
auth_type="iam" \
policies="default" \
bound_iam_principal_arn="arn:aws:iam::<ACCOUNT_ID>:role/eks-nodes-<EKS_CLUSTER_NAME>"
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
resource "vault_aws_auth_backend_role" "aws_node_role" {
  backend = "aws"
  role = "<VAULT_AWS_NODE_ROLE>"
auth_type = "iam"
token_policies = ["default"]
bound_iam_principal_arns = [
"arn:aws:iam::<ACCOUNT_ID>:role/eks-nodes-<EKS_CLUSTER_NAME>",
]
}
```
</CodeBlockConfig>
</CodeTabs>
1. Create the corresponding authentication entry in VSO:
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: vaultauth-aws-node-example
namespace: <K8S_NAMESPACE>
spec:
vaultConnectionRef: <VAULT_CONNECTION_NAME>
mount: aws
method: aws
aws:
role: <VAULT_AWS_NODE_ROLE>
region: <AWS_REGION>
```
## Instance profile
1. Create an appropriate authentication role in your Vault instance:
<CodeTabs>
<CodeBlockConfig>
```shell-session
$ vault write auth/aws/role/<VAULT_AWS_INSTANCE_ROLE> \
auth_type="iam" \
policies="default" \
inferred_entity_type="ec2_instance" \
    inferred_aws_region="<AWS_REGION>" \
bound_account_id="<ACCOUNT_ID>" \
bound_iam_principal_arn="arn:aws:iam::<ACCOUNT_ID>:instance-profile/eks-<INSTANCE_PROFILE_UUID>"
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
resource "vault_aws_auth_backend_role" "aws_node_role" {
backend = "auth/aws"
role = <VAULT_AWS_INSTANCE_ROLE>
auth_type = "iam"
token_policies = ["default"]
inferred_entity_type = "ec2_instance"
inferred_aws_region = "<AWS_REGION>"
bound_account_ids = ["<ACCOUNT_ID>"]
bound_iam_principal_arns = [
"arn:aws:iam::<ACCOUNT_ID>:role/eks-nodes-<EKS_CLUSTER_NAME>",
]
}
```
</CodeBlockConfig>
</CodeTabs>
1. Create the corresponding authentication entry in VSO:
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: vaultauth-aws-instance-example
namespace: <K8S_NAMESPACE>
spec:
vaultConnectionRef: <VAULT_CONNECTION_NAME>
mount: aws
method: aws
aws:
role: <VAULT_AWS_INSTANCE_ROLE>
region: <AWS_REGION>
```
## Static credentials
1. Create an appropriate authentication role in your Vault instance:
<CodeTabs>
<CodeBlockConfig>
```shell-session
$ vault write auth/aws/role/<VAULT_AWS_STATIC_ROLE> \
auth_type="iam" \
policies="default" \
bound_iam_principal_arn="arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE>"
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
resource "vault_aws_auth_backend_role" "aws_static_role" {
  backend = "aws"
  role = "<VAULT_AWS_STATIC_ROLE>"
auth_type = "iam"
token_policies = ["default"]
bound_iam_principal_arns = [
"arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE>",
]
}
```
</CodeBlockConfig>
</CodeTabs>
1. Create the corresponding authentication entry in VSO:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: aws-static-creds
namespace: <K8S_NAMESPACE>
data:
access_key_id: <AWS_ACCESS_KEY_ID>
secret_access_key: <AWS_SECRET_ACCESS_KEY>
session_token: <AWS_SESSION_TOKEN> # session_token is optional
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: vaultauth-aws-static-example
namespace: <K8S_NAMESPACE>
spec:
vaultConnectionRef: <VAULT_CONNECTION_NAME>
mount: aws
method: aws
aws:
role: <VAULT_AWS_STATIC_ROLE>
region: <AWS_REGION>
secretRef: aws-static-creds
```
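Note that the `data` field of a Kubernetes Secret holds base64-encoded values.
One way to create the credentials Secret is with `kubectl`, which encodes
literal values for you:

```shell-session
$ kubectl create secret generic aws-static-creds \
    --namespace <K8S_NAMESPACE> \
    --from-literal=access_key_id=<AWS_ACCESS_KEY_ID> \
    --from-literal=secret_access_key=<AWS_SECRET_ACCESS_KEY>
```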
## API
See the full list of AWS VaultAuth options on the [VSO API page](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigaws).
[aws-irsa]: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

---
layout: docs
page_title: 'Vault Secrets Operator: Vault authentication details'
description: >-
Authenticate to Vault with the Vault Secrets Operator.
---
@include 'vso/common-links.mdx'
# Vault authentication in detail
## Auth configuration
The Vault Secrets Operator (VSO) relies on `VaultAuth` resources to authenticate with Vault, and on credential
providers to generate the credentials necessary for authentication. For example, when VSO authenticates to a Kubernetes
auth backend, it generates a token using the Kubernetes service account configured in the VaultAuth resource's
kubernetes auth method. The service account must exist in the Kubernetes namespace of the requesting resource: if a
resource like a `VaultStaticSecret` is created in the `apps` namespace, the service account must also be in the `apps`
namespace. This approach ensures that cross-namespace access is not possible.
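For example, a `VaultStaticSecret` created in the `apps` namespace can only
authenticate through a VaultAuth whose service account also lives in `apps`. A
minimal sketch with illustrative names, assuming a `kubernetes` auth backend:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: apps
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: app-auth
  namespace: apps
spec:
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: app
    # must be in the same namespace as the resources that reference this VaultAuth
    serviceAccount: app-sa
```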
## Vault authentication globals
The `VaultAuthGlobal` resource is a global configuration that allows you to share a single authentication configuration
across a set of VaultAuth resources. This is useful when you have multiple VaultAuth resources that share the
same base configuration. For example, if several VaultAuth resources all authenticate to Vault using the same auth
backend, you can create a single VaultAuthGlobal resource that defines the configuration common to all of them. Any
field in the VaultAuth resource can be inherited from a VaultAuthGlobal instance, including `mount`, `method`,
`namespace`, and method-specific configuration. Typically, most fields are inherited from the VaultAuthGlobal, while
fields like `role` and credential-provider-specific fields like `serviceAccount` are set on the referring VaultAuth
instance, since they are more specific to the application that requires the VaultAuth resource.
*See [VaultAuthGlobal spec][vag-spec] and [VaultAuth spec][va-spec] for the complete list of available fields.*
## VaultAuthGlobal configuration inheritance
- The configuration in the VaultAuth resource takes precedence over the configuration in the VaultAuthGlobal resource.
- The VaultAuthGlobal can reside in any namespace, but must allow the namespace of the VaultAuth resource to reference it.
- Default VaultAuthGlobal resources are denoted by the name `default` and are automatically referenced by all VaultAuth resources
when `spec.vaultAuthGlobalRef.allowDefault` is set to `true` and VSO is running with the `allow-default-globals`
option set in the `-global-vault-auth-options` flag (the default).
- When a `spec.vaultAuthGlobalRef.namespace` is set, the search for the default VaultAuthGlobal resource is
constrained to that namespace. Otherwise, the search order is:
1. The default VaultAuthGlobal resource in the referring VaultAuth resource's namespace.
2. The default VaultAuthGlobal resource in the Operator's namespace.
## Sample use cases and configurations
The following sections provide some sample use cases and configurations for the VaultAuthGlobal resource. These
examples demonstrate how to use the VaultAuthGlobal resource to share a common authentication configuration across a
set of VaultAuth resources. Like other namespaced VSO custom resource definitions, there can be many VaultAuthGlobal
resources configured in a single Kubernetes cluster.
### Multiple applications with shared authentication backend
A Vault admin has configured a Kubernetes auth backend in Vault mounted at `kubernetes`. The admin expects to have
two applications authenticate using their own roles and service accounts. The admin creates the necessary roles in
Vault bound to the service accounts and namespaces of the applications.
The admin creates a default VaultAuthGlobal with the following configuration:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
name: default
namespace: admin
spec:
allowedNamespaces:
- apps
defaultAuthMethod: kubernetes
kubernetes:
audiences:
- vault
mount: kubernetes
role: default
serviceAccount: default
tokenExpirationSeconds: 600
```
A developer creates a `VaultAuth` and VaultStaticSecret resource in their application's namespace with the following
configurations:
Application 1 would have a configuration like this:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: app1
namespace: apps
spec:
kubernetes:
role: app1
serviceAccount: app1
vaultAuthGlobalRef:
allowDefault: true
namespace: admin
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: app1-secret
namespace: apps
spec:
destination:
create: true
name: app1-secret
hmacSecretData: true
mount: apps
path: app1
type: kv-v2
vaultAuthRef: app1
```
Application 2 would have a similar configuration:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: app2
namespace: apps
spec:
kubernetes:
role: app2
serviceAccount: app2
vaultAuthGlobalRef:
allowDefault: true
namespace: admin
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: app2-secret
namespace: apps
spec:
destination:
create: true
name: app2-secret
hmacSecretData: true
mount: apps
path: app2
type: kv-v2
vaultAuthRef: app2
```
#### Explanation
- The default VaultAuthGlobal resource is created in the `admin` namespace. This resource defines the
common configuration for all VaultAuth resources that reference it. The `allowedNamespaces` field restricts the
VaultAuth resources that can reference this VaultAuthGlobal resource. In this case, only resources in the `apps`
namespace can reference this VaultAuthGlobal resource.
- The VaultAuth resources in the `apps` namespace reference the VaultAuthGlobal resource. This allows the VaultAuth
resources to inherit the configuration from the VaultAuthGlobal resource. The `role` and `serviceAccount` fields are
specific to the application and are not inherited from the VaultAuthGlobal resource. Since the
`.spec.vaultAuthGlobalRef.allowDefault` field is set to `true`, the VaultAuth resources will automatically reference the
  `default` VaultAuthGlobal in the defined namespace.
- The VaultStaticSecret resources in the `apps` namespace reference the VaultAuth resources. This allows the
VaultStaticSecret resources to authenticate to Vault in order to sync the KV secrets to the destination Kubernetes
Secret.
### Multiple applications with shared authentication backend and role
A Vault admin has configured a Kubernetes auth backend in Vault mounted at `kubernetes`. The admin expects to have
two applications authenticate using a single role and service account. The admin creates the necessary role in
Vault bound to the same service account and namespace of the applications.
The admin or developer creates a default VaultAuthGlobal in the application's namespace with the following
configuration:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
name: default
namespace: apps
spec:
defaultAuthMethod: kubernetes
kubernetes:
audiences:
- vault
mount: kubernetes
role: apps
serviceAccount: apps
tokenExpirationSeconds: 600
```
A developer creates a single VaultAuth and the necessary VaultStaticSecret resources in their application's namespace
with the following configuration:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: apps
namespace: apps
spec:
vaultAuthGlobalRef:
allowDefault: true
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: app1-secret
namespace: apps
spec:
destination:
create: true
name: app1-secret
hmacSecretData: true
mount: apps
path: app1
type: kv-v2
vaultAuthRef: apps
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: app2-secret
namespace: apps
spec:
destination:
create: true
name: app2-secret
hmacSecretData: true
mount: apps
path: app2
type: kv-v2
vaultAuthRef: apps
```
#### Explanation
- The default VaultAuthGlobal resource is created in the `apps` namespace. It provides all the necessary configuration
for the VaultAuth resources that reference it.
- A single VaultAuth resource is created in the `apps` namespace. This resource references the VaultAuthGlobal resource
and inherits the configuration from it.
- The VaultStaticSecret resources in the `apps` namespace reference the VaultAuth resource. This allows the VaultStaticSecret
resources to authenticate to Vault in order to sync the KV secrets to the destination Kubernetes Secret.
### Multiple applications with multiple authentication backends and roles
A Vault admin has configured a Kubernetes auth backend in Vault mounted at `kubernetes`. In addition, the Vault
admin has configured a JWT auth backend mounted at `jwt`. The admin creates the necessary roles in Vault for each
auth method. The admin expects to have two applications authenticate, one using `kubernetes` auth and the other using `jwt` auth.
The admin or developer creates a default VaultAuthGlobal in the application's namespace with the following
configuration:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
name: default
namespace: apps
spec:
defaultAuthMethod: kubernetes
kubernetes:
audiences:
- vault
mount: kubernetes
role: apps
serviceAccount: apps-k8s
tokenExpirationSeconds: 600
jwt:
audiences:
- vault
mount: jwt
role: apps
serviceAccount: apps-jwt
```
A developer creates a VaultAuth and VaultStaticSecret resource in their application's namespace with the following
configurations:
Application 1 would have a configuration like this, using the kubernetes auth method:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: apps-default
namespace: apps
spec:
# uses the default kubernetes auth method as defined in
# the VaultAuthGlobal .spec.defaultAuthMethod
vaultAuthGlobalRef:
allowDefault: true
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: app1-secret
namespace: apps
spec:
destination:
create: true
name: app1-secret
hmacSecretData: true
mount: apps
path: app1
type: kv-v2
vaultAuthRef: apps-default
```
Application 2 would have a similar configuration, except it uses the JWT auth method:
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: apps-jwt
namespace: apps
spec:
method: jwt
vaultAuthGlobalRef:
allowDefault: true
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: app2-secret
namespace: apps
spec:
destination:
create: true
name: app2-secret
hmacSecretData: true
mount: apps
path: app2
type: kv-v2
vaultAuthRef: apps-jwt
```
#### Explanation
- The default VaultAuthGlobal resource is created in the `apps` namespace. It provides all the necessary configuration
for the VaultAuth resources that reference it. The `defaultAuthMethod` field defines the default auth method to use
when authenticating to Vault. The `kubernetes` and `jwt` fields define the configuration for the respective auth
method.
- Application 1 uses the default kubernetes auth method defined in the VaultAuthGlobal resource. The VaultAuth resource
references the VaultAuthGlobal resource and inherits the kubernetes auth configuration from it.
- Application 2 uses the JWT auth method defined in the VaultAuthGlobal resource. The VaultAuth resource references the
VaultAuthGlobal resource and inherits the JWT auth configuration from it.
- Neither VaultAuth resource has a `role` or `serviceAccount` field set. This is because the `role` and `serviceAccount`
fields are defined in the VaultAuthGlobal resource and are inherited by the VaultAuth resources.
## VaultAuthGlobal common errors and troubleshooting
There are a few sources for tracking down issues with VaultAuthGlobal resources:
- Vault Secrets Operator logs
- Kubernetes events
- Resource status
Below are examples of errors from each source and how to resolve them:
Sample output showing sync failures from the Vault Secrets Operator logs:
```json
{
"level": "error",
"ts": "2024-07-16T17:35:20Z",
"logger": "cachingClientFactory",
"msg": "Failed to get cacheKey from obj",
"controller": "vaultstaticsecret",
"controllerGroup": "secrets.hashicorp.com",
"controllerKind": "VaultStaticSecret",
"VaultStaticSecret": {
"name": "app1",
"namespace": "apps"
},
"namespace": "apps",
"name": "app1",
"reconcileID": "5201f597-6c5d-4d07-ae8f-30a39c80dc54",
"error": "failed getting admin/default, err=VaultAuthGlobal.secrets.hashicorp.com \"default\" not found"
}
```
Check for related Kubernetes events:
```shell
$ kubectl events --types=Warning -n admin --for vaultauths.secrets.hashicorp.com/default -o json
```
Sample output from the Kubernetes event for the VaultAuth resource:
```json
{
"kind": "Event",
"apiVersion": "v1",
"metadata": {
"name": "default.17e2c0da7b0e36b5",
"namespace": "admin",
"uid": "3ca6088e-7391-4b76-9443-a790ccae02c0",
"resourceVersion": "634396",
"creationTimestamp": "2024-07-16T17:14:12Z"
},
"involvedObject": {
"kind": "VaultAuth",
"namespace": "admin",
"name": "default",
"uid": "1dabe3a5-5479-4f5d-ac48-5db7eff7f822",
"apiVersion": "secrets.hashicorp.com/v1beta1",
"resourceVersion": "631994"
},
"reason": "Accepted",
"message": "Failed to handle VaultAuth resource request: err=failed getting admin/default, err=VaultAuthGlobal.secrets.hashicorp.com \"default\" not found",
"source": {
"component": "VaultAuth"
},
"firstTimestamp": "2024-07-16T17:14:12Z",
"lastTimestamp": "2024-07-16T17:15:53Z",
"count": 25,
"type": "Warning",
"eventTime": null,
"reportingComponent": "VaultAuth",
"reportingInstance": ""
}
```
Check the conditions on the VaultAuth resource:
```shell
$ kubectl get vaultauths.secrets.hashicorp.com -n admin default -o jsonpath='{.status}'
```
Sample output of the VaultAuth's status (prettified). The `valid` field will be `false` for the condition reason
`VaultAuthGlobalRef`:
```json
{
"conditions": [
{
"lastTransitionTime": "2024-07-16T15:35:43Z",
"message": "failed getting admin/default, err=VaultAuthGlobal.secrets.hashicorp.com \"default\" not found",
"observedGeneration": 3,
"reason": "VaultAuthGlobalRef",
"status": "False",
"type": "Available"
}
],
"specHash": "e264f241cb4ad776802924b6ad2aa272b11cffd570382605d1c2ddbdfd661ad3",
"valid": false
}
```
- **Situation**: The VaultAuthGlobal resource is not found or is invalid for some reason, denoted by error messages like
`not found...`.
**Resolution**: Ensure that the VaultAuthGlobal resource exists in the referring VaultAuth's namespace or a default
VaultAuthGlobal resource exists per [VaultAuthGlobal configuration inheritance](#vaultauthglobal-configuration-inheritance).
- **Situation**: The VaultAuthGlobal is not allowed to be referenced by the VaultAuth resource, denoted by error
messages like `target namespace "apps" is not allowed...`.
**Resolution**: Ensure that the VaultAuthGlobal resource's `spec.allowedNamespaces` field includes the namespace of the
VaultAuth resource.
- **Situation**: The VaultAuth resource is not valid due to missing required fields, denoted by error messages like
`invalid merge: empty role`.
**Resolution**: Ensure all required fields are set either on the VaultAuth resource or on the inherited
VaultAuthGlobal; see the sketch after this list.
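As a reference for the resolutions above, a minimal VaultAuthGlobal sketch that avoids all three errors might look like the following (the names, namespaces, and role are illustrative):
```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
  name: default
  namespace: admin
spec:
  # Allow VaultAuth resources in the apps namespace to reference this object
  allowedNamespaces:
    - apps
  defaultAuthMethod: kubernetes
  kubernetes:
    mount: kubernetes
    # A non-empty role avoids "invalid merge: empty role" errors on merge
    role: apps
    serviceAccount: apps
```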
A successfully merged VaultAuth resource will have the `valid` field set to `true` and the `conditions` will look
something like:
```json
{
"conditions": [
{
"lastTransitionTime": "2024-07-17T13:46:43Z",
"message": "VaultAuthGlobal successfully merged, key=admin/default, uid=6aeb3559-8f42-48bf-b16a-2305bc9a9bed, generation=7",
"observedGeneration": 1,
"reason": "VaultAuthGlobalRef",
"status": "True",
"type": "Available"
}
],
"specHash": "5cbe5544d0557926e00002514871b95c49903a9d4496ef9b794c84f1e54db1a0",
"valid": true
}
```
<Tip>
The `key` value in the message field is the namespace/name of the VaultAuthGlobal object that was successfully merged.
This is useful when you need to know which VaultAuthGlobal object the VaultAuth resource was merged with.
</Tip>
## Some authentication engines in detail
- [AWS](/vault/docs/auth/aws)
- [GCP](/vault/docs/auth/gcp) | vault | layout docs page title Vault Secrets Operator Vault authentication details description Authenticate to Vault with the Vault Secrets Operator include vso common links mdx Vault authentication in detail Auth configuration The Vault Secrets Operator VSO relies on VaultAuth resources to authenticate with Vault It relies on credential providers to generate the credentials necessary for authentication For example when VSO authenticates to a kubernetes auth backend it generates a token using the Kubernetes service account configured in the VaultAuth resource s defined kubernetes auth method The service account must be configured in the Kubernetes namespace of the requesting resource Meaning if a resource like a VaultStaticSecret is created in the apps namespace the service account must be in the apps namespace The rationale behind this approach is to ensure that cross namespace access is not possible Vault authentication globals The VaultAuthGlobal resource is a global configuration that allows you to share a single authentication configuration across a set of VaultAuth resources This is useful when you have multiple VaultAuth resources that share the same base configuration For example if you have multiple VaultAuth resources that all authenticate to Vault using the same auth backend you can create a single VaultAuthGlobal resource that defines the configuration common to all VaultAuth instances Options like mount method namespace and method specific configuration can all be inherited from the VaultAuthGlobal resource Any field in the VaultAuth resource can be inherited from a VaultAuthGlobal instance Typically most fields are inherited from the VaultAuthGlobal fields like role and credential provider specific fields like serviceAccount are usually set on the referring VaultAuth instance since they are more specific to the application that requires the VaultAuth resource See VaultAuthGlobal spec vag spec and VaultAuth spec va spec for the complete list of available fields VaultAuthGlobal configuration inheritance The configuration in the VaultAuth resource takes precedence over the configuration in the VaultAuthGlobal resource The VaultAuthGlobal can reside in any namespace but must allow the namespace of the VaultAuth resource to reference it Default VaultAuthGlobal resources are denoted by the name default and are automatically referenced by all VaultAuth resources when spec vaultAuthGlobalRef allowDefault is set to true and VSO is running with the allow default globals option set in the global vault auth options flag the default When a spec vaultAuthGlobalRef namespace is set the search for the default VaultAuthGlobal resource is constrained to that namespace Otherwise the search order is 1 The default VaultAuthGlobal resource in the referring VaultAuth resource s namespace 2 The default VaultAuthGlobal resource in the Operator s namespace Sample use cases and configurations The following sections provide some sample use cases and configurations for the VaultAuthGlobal resource These examples demonstrate how to use the VaultAuthGlobal resource to share a common authentication configuration across a set of VaultAuth resources Like other namespaced VSO custom resource definitions there can be many VaultAuthGlobal resources configured in a single Kubernetes cluster Multiple applications with shared authentication backend A Vault admin has configured a Kubernetes auth backend in Vault mounted at kubernetes The admin expects to have two applications 
---
layout: docs
page_title: Configuration
description: This section documents configuration options for the Vault Helm chart
---
# Configuration
@include 'helm/version.mdx'
The chart is highly customizable using
[Helm configuration values](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing).
Each value has a default tuned for an optimal getting started experience
with Vault. Before going into production, please review the parameters below
and consider if they're appropriate for your deployment.
- `global` - These global values affect multiple components of the chart.
- `enabled` (`boolean: true`) - The master enabled/disabled configuration. If this is true, most components will be installed by default. If this is false, no components will be installed by default and manually opting-in is required, such as by setting `server.enabled` to true.
- `namespace` (`string: ""`) - The namespace to deploy to. Defaults to the `helm` installation namespace.
- `imagePullSecrets` (`array: []`) - References secrets to be used when pulling images from private registries. See [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) for more details. May be specified as an array of name map entries or just as an array of names:
```yaml
imagePullSecrets:
- name: image-pull-secret
# or
imagePullSecrets:
- image-pull-secret
```
- `tlsDisable` (`boolean: true`) - When set to `true`, changes URLs from `https` to `http` (such as the `VAULT_ADDR=http://127.0.0.1:8200` environment variable set on the Vault pods).
- `externalVaultAddr` (`string: ""`) - External vault server address for the injector and CSI provider to use. Setting this will disable deployment of a vault server. A service account with token review permissions is automatically created if `server.serviceAccount.create=true` is set for the external Vault server to use.
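For example, a minimal sketch pointing the injector and CSI provider at an external server (the address is a placeholder):
```yaml
externalVaultAddr: "https://vault.example.com:8200"
```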
- `openshift` (`boolean: false`) - If `true`, enables configuration specific to OpenShift such as NetworkPolicy, SecurityContext, and Route.
- `psp` - Values that configure Pod Security Policy.
- `enable` (`boolean: false`) - When set to `true`, enables Pod Security Policies for Vault and Vault Agent Injector.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the Pod Security Policies. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
# or
annotations: |
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
```
- `serverTelemetry` - Values that configure metrics and telemetry
- `prometheusOperator` (`boolean: false`) - When set to `true`, enables integration with the
Prometheus Operator. See the top-level [`serverTelemetry`](/vault/docs/platform/k8s/helm/configuration#servertelemetry-1) section for the
required configuration values.
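For example, a sketch enabling the integration (the top-level `serverTelemetry` section must still be configured separately):
```yaml
serverTelemetry:
  prometheusOperator: true
```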
- `injector` - Values that configure running a Vault Agent Injector Admission Webhook Controller within Kubernetes.
- `enabled` (`boolean or string: "-"`) - When set to `true`, the Vault Agent Injector Admission Webhook controller will be created. When set to `"-"`, defaults to the value of `global.enabled`.
- `externalVaultAddr` (`string: ""`) - Deprecated: Please use [global.externalVaultAddr](/vault/docs/platform/k8s/helm/configuration#externalvaultaddr) instead.
- `replicas` (`int: 1`) - The number of pods to deploy to create a highly available cluster of Vault Agent Injectors. Requires Vault K8s 0.7.0 to have more than 1 replica.
- `leaderElector` - Values that configure the Vault Agent Injector leader election for HA deployments.
- `enabled` (`boolean: true`) - When set to `true`, enables leader election for Vault Agent Injector. This is required when using auto-tls and more than 1 replica.
- `image` - Values that configure the Vault Agent Injector Docker image.
- `repository` (`string: "hashicorp/vault-k8s"`) - The name of the Docker image for Vault Agent Injector.
- `tag` (`string: "1.5.0"`) - The tag of the Docker image for the Vault Agent Injector. **This should be pinned to a specific version when running in production.** Otherwise, other changes to the chart may inadvertently upgrade your admission controller.
- `pullPolicy` (`string: "IfNotPresent"`) - The pull policy for container images. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists.
- `agentImage` - Values that configure the Vault Agent sidecar image.
- `repository` (`string: "hashicorp/vault"`) - The name of the Docker image for the Vault Agent sidecar. This should be set to the official Vault Docker image.
- `tag` (`string: "1.18.1"`) - The tag of the Vault Docker image to use for the Vault Agent Sidecar. **Vault 1.3.1+ is required by the admission controller**.
- `agentDefaults` - Values that configure the injected Vault Agent containers default values.
- `cpuLimit` (`string: "500m"`) - The default CPU limit for injected Vault Agent containers.
- `cpuRequest` (`string: "250m"`) - The default CPU request for injected Vault Agent containers.
- `memLimit` (`string: "128Mi"`) - The default memory limit for injected Vault Agent containers.
- `memRequest` (`string: "64Mi"`) - The default memory request for injected Vault Agent containers.
- `ephemeralLimit` (`string: ""`) - The default ephemeral storage limit for injected Vault Agent containers.
- `ephemeralRequest` (`string: ""`) - The default ephemeral storage request for injected Vault Agent containers.
- `template` (`string: "map"`) - The default template type for rendered secrets if no custom templates are defined.
Possible values include `map` and `json`.
- `templateConfig` - Default values within Agent's [`template_config` stanza](/vault/docs/agent-and-proxy/agent/template).
- `exitOnRetryFailure` (`boolean: true`) - Controls whether Vault Agent exits after it has exhausted its number of template retry attempts due to failures.
- `staticSecretRenderInterval` (`string: ""`) - Configures how often Vault Agent Template should render non-leased secrets such as KV v2. See the [Vault Agent Templates documentation](/vault/docs/agent-and-proxy/agent/template#non-renewable-secrets) for more details.
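Taken together, an `agentDefaults` override might look like the following sketch (the values mirror the documented defaults, except the illustrative `staticSecretRenderInterval`):
```yaml
agentDefaults:
  cpuLimit: "500m"
  cpuRequest: "250m"
  memLimit: "128Mi"
  memRequest: "64Mi"
  template: "map"
  templateConfig:
    exitOnRetryFailure: true
    # Render non-leased secrets every 5 minutes (illustrative value)
    staticSecretRenderInterval: "5m"
```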
- `metrics` - Values that configure the Vault Agent Injector metric exporter.
- `enabled` (`boolean: false`) - When set to `true`, the Vault Agent Injector exports Prometheus metrics at the `/metrics` path.
- `authPath` (`string: "auth/kubernetes"`) - Mount path of the Vault Kubernetes Auth Method.
- `logLevel` (`string: "info"`) - Configures the log verbosity of the injector. Supported log levels: trace, debug, error, warn, info.
- `logFormat` (`string: "standard"`) - Configures the log format of the injector. Supported log formats: "standard", "json".
- `revokeOnShutdown` (`boolean: false`) - Configures all Vault Agent sidecars to revoke their token when shutting down.
- `securityContext` - Security context for the pod template and the injector container
- `pod` (`dictionary: {}`) - Defines the securityContext for the injector Pod, as YAML or a YAML-formatted multi-line templated string. Default if not specified:
```yaml
runAsNonRoot: true
runAsGroup: {{ .Values.injector.gid | default 1000 }}
runAsUser: {{ .Values.injector.uid | default 100 }}
fsGroup: {{ .Values.injector.gid | default 1000 }}
```
- `container` (`dictionary: {}`) - Defines the securityContext for the injector container, as YAML or a YAML-formatted multi-line templated string. Default if not specified:
```yaml
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
```
- `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each container of the injector. This should be a YAML dictionary of a Kubernetes [resource](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources.<br /> **Setting this is highly recommended.**
```yaml
resources:
requests:
memory: '256Mi'
cpu: '250m'
limits:
memory: '256Mi'
cpu: '250m'
```
- `webhook` - Values that control the Mutating Webhook Configuration.
- `failurePolicy` (`string: "Ignore"`) - Configures failurePolicy of the webhook. To block pod creation while the webhook is unavailable, set the policy to `"Fail"`. See [Failure Policy](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy).
- `matchPolicy` (`string: "Exact"`) - Specifies the approach to accepting changes based on the rules of the MutatingWebhookConfiguration. See [Match Policy](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy).
- `timeoutSeconds` (`int: 30`) - Specifies the number of seconds before the webhook request will be ignored or fails. Whether it is ignored or fails depends on the `failurePolicy`. See [timeouts](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts).
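For example, a sketch that blocks pod creation while the webhook is unavailable:
```yaml
webhook:
  failurePolicy: "Fail"
  matchPolicy: "Exact"
  timeoutSeconds: 30
```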
- `namespaceSelector` (`object: {}`) - The selector used by the admission webhook controller to limit the namespaces where injection can happen. If unset, all non-system namespaces are eligible for injection. See [Matching requests: namespace selector](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector).
```yaml
namespaceSelector:
matchLabels:
sidecar-injector: enabled
```
- `objectSelector` (`object: {}`) - The selector used by the admission webhook controller to limit what objects can be affected by mutation. See [Matching requests: object selector](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-objectselector).
```yaml
objectSelector:
matchLabels:
sidecar-injector: enabled
```
- `annotations` (`string or object: {}`) - Defines additional annotations to attach to the webhook. This can either be YAML or a YAML-formatted multi-line templated string.
- `namespaceSelector` (`dictionary: {}`) - Deprecated: please use [`webhook.namespaceSelector`](/vault/docs/platform/k8s/helm/configuration#namespaceselector) instead.
- `objectSelector` (`dictionary: {}`) - Deprecated: please use [`webhook.objectSelector`](/vault/docs/platform/k8s/helm/configuration#objectselector) instead.
- `extraLabels` (`dictionary: {}`) - This value defines additional labels for Vault Agent Injector pods.
```yaml
extraLabels:
'sample/label1': 'foo'
'sample/label2': 'bar'
```
- `certs` - The certs section configures how the webhook TLS certs are configured. These are the TLS certs for the Kube apiserver communicating with the webhook. By default, the injector will generate and manage its own certs, but this requires the ability for the injector to update its own `MutatingWebhookConfiguration`. In a production environment, custom certs should be used. Configure the values below to enable this.
- `secretName` (`string: ""`) - secretName is the name of the Kubernetes secret that has the TLS certificate and private key to serve the injector webhook. If this is null, then the injector will default to its automatic management mode.
- `caBundle` (`string: ""`) - The PEM-encoded CA public certificate bundle for the TLS certificate served by the injector. This must be specified as a string and can't come from a secret because it must be statically configured on the Kubernetes `MutatingAdmissionWebhook` resource. This only needs to be specified if `secretName` is not null.
- `certName` (`string: "tls.crt"`) - The name of the certificate file within the `secretName` secret.
- `keyName` (`string: "tls.key"`) - The name of the key file within the `secretName` secret.
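For example, a sketch using custom certs (the secret name and CA bundle are placeholders):
```yaml
certs:
  secretName: "injector-tls"
  caBundle: "<base64-encoded PEM CA bundle>"
  certName: "tls.crt"
  keyName: "tls.key"
```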
- `extraEnvironmentVars` (`dictionary: {}`) - Extra environment variables to set in the injector deployment.
```yaml
# Example setting injector TLS options in a deployment:
extraEnvironmentVars:
AGENT_INJECT_TLS_MIN_VERSION: tls13
AGENT_INJECT_TLS_CIPHER_SUITES: ...
```
- `affinity` - This value defines the [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) for Vault Agent Injector pods. This can either be multi-line string or YAML matching the PodSpec's affinity field. It defaults to allowing only a single pod on each node, which minimizes risk of the cluster becoming unusable if a node is lost. If you need to run more pods per node (for example, testing on Minikube), set this value to `null`.
```yaml
# Recommended default server affinity:
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: {{ template "vault.name" . }}-agent-injector
            app.kubernetes.io/instance: "{{ .Release.Name }}"
            component: webhook
        topologyKey: kubernetes.io/hostname
```
- `topologySpreadConstraints` (`array: []`) - [Topology settings](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
for injector pods. This can either be YAML or a YAML-formatted multi-line templated string.
- `tolerations` (`array: []`) - Toleration Settings for injector pods. This should be either a multi-line string or YAML matching the Toleration array.
- `nodeSelector` (`dictionary: {}`) - nodeSelector labels for injector pod assignment, formatted as a multi-line string or YAML map.
- `priorityClassName` (`string: ""`) - Priority class for injector pods
- `annotations` (`dictionary: {}`) - This value defines additional annotations for injector pods. This can either be YAML or a YAML-formatted multi-line templated string.
```yaml
annotations:
"sample/annotation1": "foo"
"sample/annotation2": "bar"
# or
annotations: |
"sample/annotation1": "foo"
"sample/annotation2": "bar"
```
- `failurePolicy` (`string: "Ignore"`) - Deprecated: please use [`webhook.failurePolicy`](/vault/docs/platform/k8s/helm/configuration#failurepolicy) instead.
- `webhookAnnotations` (`dictionary: {}`) - Deprecated: please use [`webhook.annotations`](/vault/docs/platform/k8s/helm/configuration#annotations-1) instead.
- `service` - The service section configures the Kubernetes service for the Vault Agent Injector.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the Vault Agent Injector service. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
"sample/annotation1": "foo"
"sample/annotation2": "bar"
# or
annotations: |
"sample/annotation1": "foo"
"sample/annotation2": "bar"
```
- `serviceAccount` - Injector serviceAccount specific config
- `annotations` (`dictionary: {}`) - Extra annotations to attach to the injector serviceAccount. This can either be YAML or a YAML-formatted multi-line templated string.
- `hostNetwork` (`boolean: false`) - When set to true, configures the Vault Agent Injector to run on the host network. This is useful
when alternative cluster networking is used.
- `port` (`int: 8080`) - Configures the port the Vault Agent Injector listens on.
- `podDisruptionBudget` (`dictionary: {}`) - A disruption budget limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.
```yaml
podDisruptionBudget:
maxUnavailable: 1
```
- `strategy` (`dictionary: {}`) - Strategy for updating the deployment. This can be a multi-line string or a YAML map.
```yaml
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
# or
strategy: |
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
```
- `livenessProbe` - Values that configure the liveness probe for the injector.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the liveness probe when the container starts.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
- `readinessProbe` - Values that configure the readiness probe for the injector.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the readiness probe when the container starts.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
- `startupProbe` - Values that configure the startup probe for the injector.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the startup probe when the container starts.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
- `server` - Values that configure running a Vault server within Kubernetes.
- `enabled` (`boolean or string: "-"`) - When set to `true`, the Vault server will be created. When set to `"-"`, defaults to the value of `global.enabled`.
- `enterpriseLicense` - This value refers to a Kubernetes secret that you have created that contains your enterprise license. If you are not using an enterprise image or if you plan to introduce the license key via another route, then leave secretName blank ("") or set it to null. Requires Vault Enterprise 1.8 or later.
- `secretName` (`string: ""`) - The name of the Kubernetes secret that holds the enterprise license. The secret must be in the same namespace that Vault is installed into.
- `secretKey` (`string: "license"`) - The key within the Kubernetes secret that holds the enterprise license.
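For example, a sketch referencing a license secret (the secret name is a placeholder):
```yaml
enterpriseLicense:
  secretName: "vault-license"
  secretKey: "license"
```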
- `image` - Values that configure the Vault Docker image.
- `repository` (`string: "hashicorp/vault"`) - The name of the Docker image for the containers running Vault.
- `tag` (`string: "1.18.1"`) - The tag of the Docker image for the containers running Vault. **This should be pinned to a specific version when running in production.** Otherwise, other changes to the chart may inadvertently upgrade your Vault version.
- `pullPolicy` (`string: "IfNotPresent"`) - The pull policy for container images. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists.
- `updateStrategyType` (`string: "OnDelete"`) - Configure the [Update Strategy Type](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies) for the StatefulSet.
- `logLevel` (`string: ""`) - Configures the Vault server logging verbosity. If set, this will override values defined in the Vault configuration file.
Supported log levels include: `trace`, `debug`, `info`, `warn`, `error`.
- `logFormat` (`string: ""`) - Configures the Vault server logging format. If set, this will override values defined in the Vault configuration file.
Supported log formats include: `standard`, `json`.
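For example, a sketch overriding both logging settings:
```yaml
logLevel: "debug"
logFormat: "json"
```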
- `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each container of the server. This should be a YAML dictionary of a Kubernetes [resource](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources. **Setting this is highly recommended.**
```yaml
resources:
requests:
memory: '10Gi'
limits:
memory: '10Gi'
```
- `ingress` - Values that configure Ingress services for Vault.
~> If deploying on OpenShift, these ingress settings are ignored. Use the [`route`](#route) configuration to expose Vault on OpenShift. <br/> <br/>
If [`ha`](#ha) is enabled the Ingress will point to the active vault server via the `active` Service. This requires vault 1.4+ and [service_registration](/vault/docs/configuration/service-registration/kubernetes) to be set in the vault config.
- `enabled` (`boolean: false`) - When set to `true`, an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) service will be created.
- `labels` (`dictionary: {}`) - Labels for the ingress service.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the Ingress service. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
# or
annotations: |
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
```
- `ingressClassName` (`string: ""`) - Specify the [IngressClass](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class) that should be used to implement the Ingress
- `activeService` (`boolean: true`) - When HA mode is enabled and K8s service registration is being used, configure the ingress to point to the Vault active service.
- `extraPaths` (`array: []`) - Configures extra paths to prepend to the host configuration.
This is useful when working with annotation-based services.
```yaml
extraPaths:
- path: /*
backend:
service:
name: ssl-redirect
port:
number: use-annotation
```
- `tls` (`array: []`) - Configures the TLS portion of the [Ingress spec](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls), where `hosts` is a list of the hosts defined in the Common Name of the TLS certificate, and `secretName` is the name of the Secret containing the required TLS files such as certificates and keys.
```yaml
tls:
- hosts:
- sslexample.foo.com
- sslexample.bar.com
secretName: testsecret-tls
```
- `hosts` - Values that configure the Ingress host rules.
- `host` (`string: "chart-example.local"`): Name of the host to use for Ingress.
- `paths` (`array: []`): Deprecated: `server.ingress.extraPaths` should be used instead. A list of paths that will be directed to the Vault service. At least one path is required.
```yaml
paths:
- /
- /vault
```
- `hostAliases` (`array: []`) - A list of aliases to be added to `/etc/hosts`. Specified as a YAML list following the [hostAlias format](https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/)
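For example, a sketch adding a single alias (the IP and hostname are placeholders):
```yaml
hostAliases:
  - ip: "127.0.0.1"
    hostnames:
      - "vault.internal"
```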
- `route` - Values that configure Route services for Vault in OpenShift
~> If [`ha`](#ha) is enabled the Route will point to the active vault server via the `active` Service (requires vault 1.4+ and [service_registration](/vault/docs/configuration/service-registration/kubernetes) to be set in the vault config).
- `enabled` (`boolean: false`) - When set to `true`, a Route for Vault will be created.
- `activeService` (`boolean: true`) - When HA mode is enabled and K8s service registration is being used, configure the route to point to the Vault active service.
- `labels` (`dictionary: {}`) - Labels for the Route
- `annotations` (`dictionary: {}`) - Annotations to add to the Route. This can either be YAML or a YAML-formatted multi-line templated string.
- `host` (`string: "chart-example.local"`) - Sets the hostname for the Route.
- `tls` (`dictionary: {termination: passthrough}`) - TLS config that will be passed directly to the route's TLS config, which can be used to configure other termination methods that terminate TLS at the router.
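Taken together, a Route sketch for OpenShift might look like this (the hostname is a placeholder):
```yaml
route:
  enabled: true
  host: "vault.apps.example.com"
  tls:
    termination: passthrough
```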
- `authDelegator` - Values that configure the Cluster Role Binding attached to the Vault service account.
- `enabled` (`boolean: true`) - When set to `true`, a Cluster Role Binding will be bound to the Vault service account. This Cluster Role Binding has the necessary privileges for Vault to use the [Kubernetes Auth Method](/vault/docs/auth/kubernetes).
- `readinessProbe` - Values that configure the readiness probe for the Vault pods.
- `enabled` (`boolean: true`) - When set to `true`, a readiness probe will be applied to the Vault pods.
- `path` (`string: ""`) - When set to a value, enables HTTP/HTTPS probes instead of using the default `exec` probe. The http/https scheme is controlled by the `tlsDisable` value.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `initialDelaySeconds` (`int: 5`) - When set to a value, configures the number of seconds after the container has started before probe initiates.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
- `port` (`int: 8200`) - When set to a value, overrides the default port used for the server readiness probe.
```yaml
readinessProbe:
enabled: true
path: /v1/sys/health?standbyok=true
failureThreshold: 2
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
port: 8200
```
- `livenessProbe` - Values that configure the liveness probe for the Vault pods.
- `enabled` (`boolean: false`) - When set to `true`, a liveness probe will be applied to the Vault pods.
- `execCommand` (`array: []`) - Used to define a liveness exec command. If provided, exec is preferred to httpGet (path) as the livenessProbe handler.
```yaml
execCommand:
- /bin/sh
- -c
- /vault/userconfig/mylivenessscript/run.sh
```
- `path` (`string: "/v1/sys/health?standbyok=true"`) - Path for the livenessProbe to use httpGet as the livenessProbe handler. The http/https scheme is controlled by the `tlsDisable` value.
- `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the liveness probe when the container starts.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
- `port` (`int: 8200`) - Port number on which livenessProbe will be checked if httpGet is used as the livenessProbe handler.
```yaml
livenessProbe:
enabled: true
path: /v1/sys/health?standbyok=true
initialDelaySeconds: 60
failureThreshold: 2
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
port: 8200
```
- `terminationGracePeriodSeconds` (`int: 10`) - Optional duration in seconds the pod needs to terminate gracefully. See [Container Lifecycle Hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/).
- `preStopSleepSeconds` (`int: 5`) - Used to set the sleep time during the preStop step.
- `postStart` (`array: []`) - Used to define commands to run after the pod is ready. This can be used to automate processes such as initialization or bootstrapping auth methods.
```yaml
postStart:
- /bin/sh
- -c
- /vault/userconfig/myscript/run.sh
```
- `extraInitContainers` (`array: null`) - extraInitContainers is a list of init containers. Specified as a YAML list. This is useful if you need to run a script to provision TLS certificates or write out configuration files in a dynamic way.
- `extraContainers` (`array: null`) - The extra containers to be applied to the Vault server pods.
```yaml
extraContainers:
- name: mycontainer
image: 'app:0.0.0'
env: ...
```
- `extraEnvironmentVars` (`dictionary: {}`) - The extra environment variables to be applied to the Vault server.
```yaml
# Extra Environment Variables are defined as key/value strings.
extraEnvironmentVars:
GOOGLE_REGION: global
GOOGLE_PROJECT: myproject
GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json
```
- `shareProcessNamespace` (`boolean: false`) - Enables process namespace sharing between Vault and the extraContainers. This is useful if Vault must be signaled, e.g. to send a SIGHUP for log rotation.
- `extraArgs` (`string: null`) - The extra arguments to be applied to the Vault server startup command.
```yaml
extraArgs: '-config=/path/to/extra/config.hcl -log-format=json'
```
- `extraPorts` (`array: []`) - additional ports to add to the server statefulset
```yaml
extraPorts:
- containerPort: 8300
name: http-monitoring
```
- `extraSecretEnvironmentVars` (`array: []`) - The extra environment variables populated from a secret to be applied to the Vault server.
- `envName` (`string: required`) -
Name of the environment variable to be populated in the Vault container.
- `secretName` (`string: required`) -
Name of Kubernetes secret used to populate the environment variable defined by `envName`.
- `secretKey` (`string: required`) -
Name of the key where the requested secret value is located in the Kubernetes secret.
```yaml
# Extra Environment Variables populated from a secret.
extraSecretEnvironmentVars:
- envName: AWS_SECRET_ACCESS_KEY
secretName: vault
secretKey: AWS_SECRET_ACCESS_KEY
```
- `extraVolumes` (`array: []`) - Deprecated: please use `volumes` instead. A list of extra volumes to mount to Vault servers. This is useful for bringing in extra data that can be referenced by other configurations at a well known path, such as TLS certificates. The value of this should be a list of objects. Each object supports the following keys:
- `type` (`string: required`) -
Type of the volume, must be one of "configMap" or "secret". Case sensitive.
- `name` (`string: required`) -
Name of the configMap or secret to be mounted. This also controls the path
that it is mounted to. The volume will be mounted to `/vault/userconfig/<name>` by default
unless `path` is configured.
- `path` (`string: /vault/userconfigs`) -
Name of the path where a configMap or secret is mounted. If not specified
the volume will be mounted to `/vault/userconfig/<name of volume>`.
- `defaultMode` (`string: "420"`) -
Default mode of the mounted files.
```yaml
extraVolumes:
- type: 'secret'
name: 'vault-certs'
path: '/etc/pki'
```
- `volumes` (`array: null`) - A list of volumes made available to all containers. This takes
standard Kubernetes volume definitions.
```yaml
volumes:
- name: plugins
emptyDir: {}
```
- `volumeMounts` (`array: null`) - A list of volumes mounts made available to all containers. This takes
standard Kubernetes volume definitions.
```yaml
volumeMounts:
- mountPath: /usr/local/libexec/vault
name: plugins
readOnly: true
```
- `affinity` - This value defines the [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) for server pods. This should be either a multi-line string or YAML matching the PodSpec's affinity field. It defaults to allowing only a single pod on each node, which minimizes risk of the cluster becoming unusable if a node is lost. If you need to run more pods per node (for example, testing on Minikube), set this value to `null`.
```yaml
# Recommended default server affinity:
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: {{ template "vault.name" . }}
            app.kubernetes.io/instance: "{{ .Release.Name }}"
            component: server
        topologyKey: kubernetes.io/hostname
```
- `topologySpreadConstraints` (`array: []`) - [Topology settings](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
for server pods. This can either be YAML or a YAML-formatted multi-line templated string.
- `tolerations` (`array: []`) - This value defines the [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) that are acceptable when being scheduled. This should be either a multi-line string or YAML matching the Toleration array in a PodSpec.
```yaml
tolerations: |
- key: 'node.kubernetes.io/unreachable'
operator: 'Exists'
effect: 'NoExecute'
tolerationSeconds: 6000
```
- `nodeSelector` (`dictionary: {}`) - This value defines additional node selection criteria for more control over where the Vault servers are deployed. This should be formatted as a multi-line string or YAML map.
```yaml
nodeSelector: |
disktype: ssd
```
- `networkPolicy` - Values that configure the Vault Network Policy.
- `enabled` (`boolean: false`) - When set to `true`, enables a Network Policy for the Vault cluster.
- `egress` (`array: []`) - This value configures the [egress](https://kubernetes.io/docs/concepts/services-networking/network-policies/) network policy rules.
```yaml
egress:
- to:
- ipBlock:
cidr: 10.0.0.0/24
ports:
- protocol: TCP
port: 8200
```
- `ingress` (`array: []`) - This value configures the [ingress](https://kubernetes.io/docs/concepts/services-networking/network-policies/) network policy rules. The default is below:
```yaml
ingress:
- from:
- namespaceSelector: {}
ports:
- port: 8200
protocol: TCP
- port: 8201
protocol: TCP
```
- `priorityClassName` (`string: ""`) - Priority class for server pods.
- `extraLabels` (`dictionary: {}`) - This value defines additional labels for server pods.
```yaml
extraLabels:
'sample/label1': 'foo'
'sample/label2': 'bar'
```
- `annotations` (`dictionary: {}`) - This value defines additional annotations for server pods. This can either be YAML or a YAML-formatted multi-line templated string.
```yaml
annotations:
"sample/annotation1": "foo"
"sample/annotation2": "bar"
# or
annotations: |
"sample/annotation1": "foo"
"sample/annotation2": "bar"
```
- `includeConfigAnnotation` (`boolean: false`) - Add an annotation to the server configmap and the statefulset pods, `vaultproject.io/config-checksum`, that is a hash of the Vault configuration. This can be used together with an OnDelete deployment strategy to help identify which pods still need to be deleted during a deployment to pick up any configuration changes.
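For example, pairing the checksum annotation with the `OnDelete` update strategy (both are `server`-level values):
```yaml
updateStrategyType: "OnDelete"
includeConfigAnnotation: true
```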
- `service` - Values that configure the Kubernetes service created for Vault. These options are also used for the `active` and `standby` services when [`ha`](#ha) is enabled.
- `enabled` (`boolean: true`) - When set to `true`, a Kubernetes service will be created for Vault.
- `active` - Values that apply only to the vault-active service.
- `enabled` (`boolean: true`) - When set to `true`, the vault-active Kubernetes service will be created for Vault, selecting pods which label themselves as the cluster leader with `vault-active: "true"`.
- `annotations` (`dictionary: {}`) - Extra annotations for the active service definition. This can either be YAML or a YAML-formatted multi-line templated string.
- `standby` - Values that apply only to the vault-standby service.
- `enabled` (`boolean: true`) - When set to `true`, the vault-standby Kubernetes service will be created for Vault, selecting pods which label themselves as a cluster follower with `vault-active: "false"`.
- `annotations` (`dictionary: {}`) - Extra annotations for the standby service definition. This can either be YAML or a YAML-formatted multi-line templated string.
- `clusterIP` (`string`) - ClusterIP controls whether an IP address (cluster IP) is attached to the Vault service within Kubernetes. By default the Vault service will be given a Cluster IP address, set to `None` to disable. When disabled Kubernetes will create a "headless" service. Headless services can be used to communicate with pods directly through DNS instead of a round robin load balancer.
- `type` (`string: "ClusterIP"`) - Sets the type of service to create, such as `NodePort`.
- `externalTrafficPolicy` (`string: "Cluster"`) - The [externalTrafficPolicy](https://kubernetes.io/docs/concepts/services-networking/service/#external-traffic-policy) can be set to either Cluster or Local and is only valid for LoadBalancer and NodePort service types.
- `port` (`int: 8200`) - Port on which the Vault service listens.
- `targetPort` (`int: 8200`) - Port on which the Vault server is listening inside the pod; the service forwards traffic to this port.
- `nodePort` (`int:`) - When type is set to `NodePort`, the bound node port can be configured using this value. A random port will be assigned if this is left blank.
- `activeNodePort` (`int:`) - (When HA mode is enabled) If type is set to "NodePort", a specific nodePort value can be configured for the `active` service, and will be random if left blank.
- `standbyNodePort` (`int:`) - (When HA mode is enabled) If type is set to "NodePort", a specific nodePort value can be configured for the `standby` service, and will be random if left blank.
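For example, a sketch exposing an HA cluster over NodePort (the port numbers are placeholders):
```yaml
type: NodePort
nodePort: 30200
activeNodePort: 30201
standbyNodePort: 30202
```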
- `publishNotReadyAddresses` (`boolean: true`) - If true, do not wait for pods to be ready before including them in the services' targets. Does not apply to the headless service, which is used for cluster-internal communication.
- `instanceSelector`
- `enabled` (`boolean: true`) - When set to false, the service selector used for the vault, vault-active, and vault-standby services will not filter on `app.kubernetes.io/instance`. This means they may select pods from outside this deployment of the Helm chart. Does not affect the headless vault-internal service with `ClusterIP: None`.
- `annotations` (`dictionary: {}`) - This value defines additional annotations for the service. This can either be YAML or a YAML-formatted multi-line templated string.
```yaml
annotations:
"sample/annotation1": "foo"
"sample/annotation2": "bar"
# or
annotations: |
"sample/annotation1": "foo"
"sample/annotation2": "bar"
```
- `ipFamilyPolicy` (`string: ""`) - The IP family policy and IP families options set the service's behaviour in a dual-stack environment. Omitting these values lets the service fall back to whatever defaults the CNI dictates. These options are only supported on Kubernetes versions >= 1.23. The service's [supported IP family policy](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services) can be one of: `SingleStack`, `PreferDualStack`, or `RequireDualStack`.
- `serviceIPFamilies` (`array: []`) - Sets the families that should be supported and the order in which they should be applied to ClusterIP as well. Can be IPv4 and/or IPv6.
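For example, a dual-stack sketch that prefers IPv4, using the option names listed above:
```yaml
ipFamilyPolicy: PreferDualStack
serviceIPFamilies:
  - IPv4
  - IPv6
```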
- `serviceAccount` - Values that configure the Kubernetes service account created for Vault.
- `create` (`boolean: true`): If set to true, creates a service account used by Vault.
- `name` (`string: ""`): Name of the service account to use. If not set and `create` is true, a name is generated using the name of the installation (default is "vault").
- `createSecret` (`boolean: false`): Create a Kubernetes Secret object to store a non-expiring token for the service account. Prior to Kubernetes 1.24.0, Kubernetes generated this secret for each service account by default. Kubernetes recommends using short-lived tokens from the TokenRequest API or projected volumes instead if possible. For more details, see https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets. `server.serviceAccount.create` must be set to `true` in order to use this feature.
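For example, a sketch that creates the service account along with a non-expiring token secret (the account name is a placeholder):
```yaml
create: true
name: "vault-server"
createSecret: true
```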
- `annotations` (`dictionary: {}`) - This value defines additional annotations for the service account. This can either be YAML or a YAML-formatted multi-line templated string.
```yaml
annotations:
"sample/annotation1": "foo"
"sample/annotation2": "bar"
# or
annotations: |
"sample/annotation1": "foo"
"sample/annotation2": "bar"
```
- `extraLabels` (`dictionary: {}`) - This value defines additional labels for the Vault Server service account.
```yaml
extraLabels:
'sample/label1': 'foo'
'sample/label2': 'bar'
```
- `serviceDiscovery` - Values that configure permissions required for Vault Server to automatically discover and join a Vault cluster using pod metadata.
- `enabled` (`boolean: true`) - Enable or disable a service account role binding with the permissions required for Vault's Kubernetes [`service_registration`](/vault/docs/configuration/service-registration/kubernetes) config option.
- `dataStorage` - This configures the volume used for storing Vault data when not using external storage such as Consul.
- `enabled` (`boolean: true`) -
Enables a persistent volume to be created for storing Vault data when not using an external storage service.
- `size` (`string: 10Gi`) -
Size of the volume to be created for Vault's data storage when not using an external storage service.
- `storageClass` (`string: null`) -
Name of the storage class to use when creating the data storage volume.
- `mountPath` (`string: /vault/data`) -
Configures the path in the Vault pod where the data storage will be mounted.
- `accessMode` (`string: ReadWriteOnce`) -
Type of access mode of the storage device. See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) for more information.
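For example, a sketch using a larger volume and a custom storage class (the size and class name are placeholders):
```yaml
enabled: true
size: 50Gi
storageClass: "fast-ssd"
accessMode: ReadWriteOnce
```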
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the data PVCs. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
kubernetes.io/my-pvc: foobar
# or
annotations: |
kubernetes.io/my-pvc: foobar
```
- `labels` (`dictionary: {}`) - This value defines additional labels to add to the
data PVCs. This can either be YAML or a YAML-formatted multi-line templated
string.
- `persistentVolumeClaimRetentionPolicy` (`dictionary: {}`) - Specifies the Persistent Volume Claim (PVC) [retention policy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention).
```yaml
persistentVolumeClaimRetentionPolicy:
whenDeleted: Retain
whenScaled: Retain
```
- `auditStorage` - This configures the volume used for storing Vault's audit logs. See the [Vault documentation](/vault/docs/audit) for more information.
- `enabled` (`boolean: false`) -
Enables a persistent volume to be created for storing Vault's audit logs.
- `size` (`string: 10Gi`) -
Size of the volume to be created for Vault's audit logs.
- `storageClass` (`string: null`) -
Name of the storage class to use when creating the audit storage volume.
- `mountPath` (`string: /vault/audit`) -
Configures the path in the Vault pod where the audit storage will be mounted.
- `accessMode` (`string: ReadWriteOnce`) -
Type of access mode of the storage device.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the audit PVCs. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
kubernetes.io/my-pvc: foobar
# or
annotations: |
kubernetes.io/my-pvc: foobar
```
- `labels` (`dictionary: {}`) - This value defines additional labels to add to the
audit PVCs. This can either be YAML or a YAML-formatted multi-line templated
string.
- `dev` - This configures `dev` mode for the Vault server.
- `enabled` (`boolean: false`) -
Enables `dev` mode for the Vault server. This mode is useful for experimenting with Vault without needing to unseal.
- `devRootToken` (`string: "root"`) - Configures the root token for the Vault development server.
~> **Security Warning:** Never, ever, ever run a "dev" mode server in production. It is insecure and will lose data on every restart (since it stores data in-memory). It is only made for development or experimentation.
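For example, a minimal sketch of a throwaway dev-mode deployment (the token value is a placeholder):
```yaml
dev:
  enabled: true
  devRootToken: "dev-only-token"
```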
- `standalone` - This configures `standalone` mode for the Vault server.
- `enabled` (`boolean: true`) -
Enables `standalone` mode for the Vault server. This mode uses the `file` storage backend and requires a volume for persistence (`dataStorage`).
- `config` (`string or object: "{}"`) -
A raw string of extra HCL or JSON [configuration](/vault/docs/configuration) for Vault servers.
This will be saved as-is into a ConfigMap that is read by the Vault servers.
This can be used to add additional configuration that isn't directly exposed by the chart.
If an object is provided, it will be written as JSON.
```yaml
# ExtraConfig values are formatted as a multi-line string:
config: |
api_addr = "http://POD_IP:8200"
listener "tcp" {
tls_disable = 1
address = "0.0.0.0:8200"
}
storage "file" {
path = "/vault/data"
}
```
This can also be set using Helm's `--set` flag (vault-helm v0.1.0 and later), using the following syntax:
```shell
--set server.standalone.config='{ listener "tcp" { address = "0.0.0.0:8200" } }'
```
- `ha` - This configures `ha` mode for the Vault server.
- `enabled` (`boolean: false`) -
Enables `ha` mode for the Vault server. This mode uses a highly available backend storage (such as Consul) to store Vault's data. By default this is configured to use [Consul Helm](https://github.com/hashicorp/consul-k8s). For a complete list of storage backends, see the [Vault documentation](/vault/docs/configuration).
- `apiAddr` (`string: "{}"`) -
Set the API address configuration for a Vault cluster. If set to an empty string, the pod IP address is used.
- `clusterAddr` (`string: null`) - Set the [`cluster_addr`](/vault/docs/configuration#cluster_addr) configuration for Vault HA.
If null, defaults to `https://$(HOSTNAME).{{ template "vault.fullname" . }}-internal:8201`.
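For example, a sketch that advertises a fixed API address to clients (the hostname is a placeholder):
```yaml
enabled: true
replicas: 3
apiAddr: "https://vault.example.com:8200"
```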
- `raft` - This configures `raft` integrated storage mode for the Vault server.
- `enabled` (`boolean: false`) -
Enables `raft` integrated storage mode for the Vault server. This mode uses persistent volumes for storage.
- `setNodeId` (`boolean: false`) - Set the Node Raft ID to the name of the pod.
- `config` (`string or object: "{}"`) -
A raw string of extra HCL or JSON [configuration](/vault/docs/configuration) for Vault servers.
This will be saved as-is into a ConfigMap that is read by the Vault servers.
This can be used to add additional configuration that isn't directly exposed by the chart.
If an object is provided, it will be written as JSON.
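For example, a sketch close to the chart's default Raft stanza:
```yaml
raft:
  enabled: true
  setNodeId: true
  config: |
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "raft" {
      path = "/vault/data"
    }
    service_registration "kubernetes" {}
```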
- `replicas` (`int: 3`) -
The number of pods to deploy to create a highly available cluster of Vault servers.
- `updatePartition` (`int: 0`) -
If an updatePartition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet’s `.spec.template` is updated. If set to `0`, this disables partition updates. For more information see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#rolling-updates).
- `config` (`string or object: "{}"`) -
A raw string of extra HCL or JSON [configuration](/vault/docs/configuration) for Vault servers.
This will be saved as-is into a ConfigMap that is read by the Vault servers.
This can be used to add additional configuration that isn't directly exposed by the chart.
If an object is provided, it will be written as JSON.
```yaml
# ExtraConfig values are formatted as a multi-line string:
config: |
ui = true
api_addr = "http://POD_IP:8200"
listener "tcp" {
tls_disable = 1
address = "0.0.0.0:8200"
}
storage "consul" {
path = "vault/"
address = "HOST_IP:8500"
}
```
This can also be set using Helm's `--set` flag (vault-helm v0.1.0 and later), using the following syntax:
```shell
--set server.ha.config='{ listener "tcp" { address = "0.0.0.0:8200" } }'
```
- `disruptionBudget` - Values that configure the disruption budget policy. See the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) for more information.
- `enabled` (`boolean: true`) -
Enables disruption budget policy to limit the number of pods that are down simultaneously from voluntary disruptions.
- `maxUnavailable` (`int: null`) -
The maximum number of unavailable pods. By default, this will be automatically
computed based on the `server.replicas` value to be `(n/2)-1`. If you need to set
this to `0`, you will need to add a `--set 'server.disruptionBudget.maxUnavailable=0'`
flag to the helm chart installation command because of a limitation in the Helm
templating language.
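For example, with `server.ha.replicas: 5` the computed default is `(5/2)-1 = 1`; a sketch that pins the value explicitly:
```yaml
disruptionBudget:
  enabled: true
  maxUnavailable: 1
```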
- `statefulSet` - This configures settings for the Vault Statefulset.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the Vault statefulset. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
kubernetes.io/my-statefulset: foobar
# or
annotations: |
kubernetes.io/my-statefulset: foobar
```
- `securityContext` - Set the Pod and container security contexts
- `pod` (`dictionary: {}`) - Defines the securityContext for the server Pods, as YAML or a YAML-formatted multi-line templated string.
Default if not specified and `global.openshift=false`:
```yaml
runAsNonRoot: true
runAsGroup: {{ .Values.server.gid | default 1000 }}
runAsUser: {{ .Values.server.uid | default 100 }}
fsGroup: {{ .Values.server.gid | default 1000 }}
```
Defaults to empty if not specified and `global.openshift=true`.
- `container` (`dictionary: {}`) - Defines the securityContext for the server containers, as YAML or a YAML-formatted multi-line templated string.
Default if not specified and `global.openshift=false`:
```yaml
allowPrivilegeEscalation: false
```
Defaults to empty if not specified and `global.openshift=true`.
- `ui` - Values that configure the Vault UI.
- `enabled` (`boolean: false`) - If true, the UI will be enabled. The UI will only be enabled on Vault servers. If `server.enabled` is false, then this setting has no effect. To expose the UI in some way, you must configure `ui.service`.
- `serviceType` (`string: ClusterIP`) - The service type to register. This defaults to `ClusterIP`.
The available service types are documented on
[the Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types).
- `publishNotReadyAddresses` (`boolean: true`) - If set to true, will route traffic to Vault pods that aren't ready (if they're sealed or uninitialized).
- `activeVaultPodOnly` (`boolean: false`) - If set to true, the UI service will only route to the active pod in a Vault HA cluster.
- `serviceNodePort` (`int: null`) - Sets the Node Port value when using `serviceType: NodePort` on the Vault UI service.
- `externalPort` (`int: 8200`) - Sets the external port value of the service.
- `targetPort` (`int: 8200`) - Sets the target port value of the service.
- `serviceIPFamilyPolicy` (`string: ""`) - The IP family policy and IP families options set the service's behaviour in a dual-stack environment. Omitting these values lets the service fall back to whatever defaults the CNI dictates. These options are only supported on Kubernetes versions >= 1.23. The service's [supported IP family policy](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services) can be one of: `SingleStack`, `PreferDualStack`, or `RequireDualStack`.
- `serviceIPFamilies` (`array: []`) - Sets the families that should be supported and the order in which they should be applied to ClusterIP as well. Can be IPv4 and/or IPv6.
- `externalTrafficPolicy` (`string: "Cluster"`) - The [externalTrafficPolicy](https://kubernetes.io/docs/concepts/services-networking/service/#external-traffic-policy) can be set to either Cluster or Local and is only valid for LoadBalancer and NodePort service types.
- `loadBalancerSourceRanges` (`array`) - This value defines additional source CIDRs when using `serviceType: LoadBalancer`.
```yaml
loadBalancerSourceRanges:
- 10.0.0.0/16
- 120.78.23.3/32
```
- `loadBalancerIP` (`string`) - This value defines the IP address of the load balancer when using `serviceType: LoadBalancer`.
- `annotations` (`dictionary: {}`) - This value defines additional annotations for the UI service. This can either be YAML or a YAML-formatted multi-line templated string.
```yaml
annotations:
"sample/annotation1": "foo"
"sample/annotation2": "bar"
# or
annotations: |
"sample/annotation1": "foo"
"sample/annotation2": "bar"
```
- `csi` - Values that configure running the Vault CSI Provider.
- `enabled` (`boolean: false`) - When set to `true`, the Vault CSI Provider daemonset will be created.
- `image` - Values that configure the Vault CSI Provider Docker image.
- `repository` (`string: "hashicorp/vault-csi-provider"`) - The name of the Docker image for the Vault CSI Provider.
- `tag` (`string: "1.5.0"`) - The tag of the Docker image for the Vault CSI Provider. **This should be pinned to a specific version when running in production.** Otherwise, other changes to the chart may inadvertently upgrade your CSI provider.
- `pullPolicy` (`string: "IfNotPresent"`) - The pull policy for container images. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists locally.
- `volumes` (`array: null`) - A list of volumes made available to all containers. This takes
standard Kubernetes volume definitions.
```yaml
volumes:
- name: plugins
emptyDir: {}
```
- `volumeMounts` (`array: null`) - A list of volume mounts made available to all containers. This takes
standard Kubernetes volume mount definitions.
```yaml
volumeMounts:
- mountPath: /usr/local/libexec/vault
name: plugins
readOnly: true
```
- `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each of the CSI containers. This should be a YAML dictionary of a Kubernetes [resource](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources.<br /> **Setting this is highly recommended.**
```yaml
resources:
requests:
memory: '10Gi'
limits:
memory: '10Gi'
```
- `hmacSecretName` (`string: ""`) - Override the default secret name for the CSI Provider's HMAC key used for generating secret versions.
- `hostNetwork` (`bool: false`) - Set the `hostNetwork` parameter on the CSI Provider pods to
avoid the need for a dedicated pod IP.
- `daemonSet` - Values that configure the Vault CSI Provider daemonSet.
- `updateStrategy` - Values that configure the Vault CSI Provider update strategy.
- `type` (`string: "RollingUpdate"`) - The [type of update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies) to be used when the daemonset is updated using Helm upgrades.
- `maxUnavailable` (`int: null`) - The maximum number of unavailable pods during an upgrade.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the Vault CSI Provider daemonset. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
foo: bar
# or
annotations: |
foo: bar
```
- `extraLabels` (`dictionary: {}`) - This value defines additional labels for the CSI provider daemonset.
- `providersDir` (`string: "/etc/kubernetes/secrets-store-csi-providers"`) - Provider host path (must match the CSI provider's path).
- `kubeletRootDir` (`string: "/var/lib/kubelet"`) - Kubelet host path.
- `securityContext` - Security context for the pod template and container in the CSI provider daemonSet.
- `pod` (`dictionary: {}`) - Pod-level securityContext. May be specified as YAML or a YAML-formatted multi-line templated string.
- `container` (`dictionary: {}`) - Container-level securityContext. May be specified as YAML or a YAML-formatted multi-line templated string.
- `pod` - Values that configure the Vault CSI Provider pod.
- `annotations` (`dictionary: {}`) - This value defines additional annotations to
add to the Vault CSI Provider pods. This can either be YAML or a YAML-formatted
multi-line templated string.
```yaml
annotations:
foo: bar
# or
annotations: |
foo: bar
```
- `extraLabels` (`dictionary: {}`) - This value defines additional labels for CSI provider pods.
- `nodeSelector` (`dictionary: {}`) - [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) labels for csi pod assignment, formatted as a multi-line string or YAML map.
```yaml
nodeSelector:
beta.kubernetes.io/arch: amd64
```
- `affinity` (`dictionary: {}`) - This should be either a multi-line string or YAML matching the PodSpec's affinity field.
- `tolerations` (`array: []`) - Toleration Settings for CSI pods. This should be a multi-line string or YAML matching the Toleration array in a PodSpec.
- `priorityClassName` (`string: ""`) - Priority class for CSI Provider pods.
- `serviceAccount` - Values that configure the Vault CSI Provider's serviceaccount.
- `annotations` (`dictionary: {}`) - This value defines additional
annotations for the serviceAccount definition. This can either be YAML or
a YAML-formatted multi-line templated string.
```yaml
annotations:
foo: bar
# or
annotations: |
foo: bar
```
- `extraLabels` (`dictionary: {}`) - This value defines additional labels for the CSI provider service account.
- `readinessProbe` - Values that configure the readiness probe for the Vault CSI Provider pods.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `initialDelaySeconds` (`int: 5`) - When set to a value, configures the number of seconds after the container has started before probe initiates.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
- `livenessProbe` - Values that configure the liveness probe for the Vault CSI Provider pods.
- `initialDelaySeconds` (`int: 5`) - Sets the initial delay of the liveness probe when the container starts.
- `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.
- `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.
- `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.
- `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.
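For example, a sketch that relaxes the liveness probe defaults (the values are illustrative):
```yaml
livenessProbe:
  initialDelaySeconds: 10
  failureThreshold: 3
  periodSeconds: 10
```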
- `logLevel` (`string: "info"`) - Configures the log level for the Vault CSI provider. Supported
log levels include: `trace`, `debug`, `info`, `warn`, `error`, and `off`.
- `debug` (`bool: false`) - Deprecated: set `logLevel` to `debug` instead. When set to true,
enables debug logging on the Vault CSI Provider daemonset.
- `extraArgs` (`array: []`) - The extra arguments to be applied to the CSI pod startup command. See [here](/vault/docs/platform/k8s/csi/configurations#command-line-arguments) for available flags.
- `agent` - Configures the Vault Agent sidecar for the CSI Provider
- `enabled` (`bool: true`) - Whether to enable the Vault Agent sidecar for the CSI provider.
- `extraArgs` (`array: []`) - The extra arguments to be applied to the agent startup command.
- `image` - Values that configure the Vault Agent sidecar image for the CSI Provider.
- `pullPolicy` (`string: "IfNotPresent"`) - The pull policy for agent image. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists.
- `repository` (`string: "hashicorp/vault"`) - The name of the Docker image for the Vault Agent sidecar. This should be set to the official Vault Docker image.
- `tag` (`string: "1.18.1"`) - The tag of the Vault Docker image to use for the Vault Agent Sidecar.
- `logFormat` (`string: "standard"`) - Configures the log format of the Vault Agent sidecar. Supported log formats: `standard`, `json`.
- `logLevel` (`string: "info"`) - Configures the log verbosity of the Vault Agent sidecar. Supported log levels: `trace`, `debug`, `error`, `warn`, `info`.
- `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for the agent. This should be a YAML dictionary of a Kubernetes [resource](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) object.
```yaml
resources:
requests:
memory: '256Mi'
cpu: '250m'
limits:
memory: '256Mi'
cpu: '250m'
```
- `serverTelemetry` - Values that configure metrics and telemetry. Enabling these features requires setting
the `telemetry {}` stanza in the Vault configuration. See the [telemetry](/vault/docs/configuration/telemetry)
[docs](/vault/docs/internals/telemetry) for more on the Vault configuration.
If authorization is not set for authenticating to Vault's metrics endpoint,
the following Vault server `telemetry{}` config must be included in the
`listener "tcp"{}` stanza of the Vault configuration:
```hcl
listener "tcp" {
  tls_disable = 1
  address = "0.0.0.0:8200"
  telemetry {
    unauthenticated_metrics_access = "true"
  }
}
```
In addition, a top level `telemetry {}` stanza must also be included in the Vault configuration, such as:
```hcl
telemetry {
  prometheus_retention_time = "30s"
  disable_hostname = true
}
```
- `serviceMonitor` - Values that configure monitoring the Vault server.
- `enabled` (`boolean: false`) - When set to `true`, enable deployment of the Vault Server
ServiceMonitor CustomResource. The Prometheus operator *must* be installed before enabling this
feature. If not, the chart will fail to install due to missing CustomResourceDefinitions provided by
the operator.
Instructions on how to install the Prometheus operator Helm chart can be found [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
More information can be found in the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator)
and [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) repositories.
- `selectors` (`dictionary: {}`) - Selector labels to add to the ServiceMonitor.
- `interval` (`string: "30s"`) - Interval at which Prometheus scrapes metrics.
- `scrapeTimeout` (`string: "10s"`) - Timeout for Prometheus scrapes.
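For example, a minimal sketch enabling the ServiceMonitor (assuming the Prometheus operator is already installed):
```yaml
serviceMonitor:
  enabled: true
  interval: 30s
  scrapeTimeout: 10s
```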
- `tlsConfig` (`dictionary: {}`) - tlsConfig used for scraping the Vault metrics API. See the
prometheus [API
reference](https://prometheus-operator.dev/docs/api-reference/api/#monitoring.coreos.com/v1.TLSConfig)
for more details.
```yaml
tlsConfig:
ca:
secret:
name: vault-metrics-client
key: ca.crt
```
- `authorization` (`dictionary: {}`) - Authorization used for scraping the Vault metrics API.
See the prometheus [API
reference](https://prometheus-operator.dev/docs/api-reference/api/#monitoring.coreos.com/v1.SafeAuthorization)
for more details.
```yaml
authorization:
credentials:
name: vault-metrics-client
key: token
```
- `prometheusRules` - Values that configure Prometheus rules.
- `enabled` (`boolean: false`) - Deploy the PrometheusRule custom resource for AlertManager-based
alerts. Requires that AlertManager is properly deployed.
- `selectors` (`dictionary: {}`) - Selector labels to add to the Prometheus rules.
- `rules` (`array: []`) - Prometheus rules to create.
For example:
```yaml
rules:
- alert: vault-HighResponseTime
annotations:
message: The response time of Vault is over 500ms on average over the last 5 minutes.
expr: vault_core_handle_request{quantile="0.5", namespace="mynamespace"} > 500
for: 5m
labels:
severity: warning
- alert: vault-HighResponseTime
annotations:
message: The response time of Vault is over 1s on average over the last 5 minutes.
expr: vault_core_handle_request{quantile="0.5", namespace="mynamespace"} > 1000
for: 5m
labels:
severity: critical
``` | vault | layout docs page title Configuration description This section documents configuration options for the Vault Helm chart Configuration include helm version mdx The chart is highly customizable using Helm configuration values https helm sh docs intro using helm customizing the chart before installing Each value has a default tuned for an optimal getting started experience with Vault Before going into production please review the parameters below and consider if they re appropriate for your deployment global These global values affect multiple components of the chart enabled boolean true The master enabled disabled configuration If this is true most components will be installed by default If this is false no components will be installed by default and manually opting in is required such as by setting server enabled to true namespace string The namespace to deploy to Defaults to the helm installation namespace imagePullSecrets array References secrets to be used when pulling images from private registries See Pull an Image from a Private Registry https kubernetes io docs tasks configure pod container pull image private registry for more details May be specified as an array of name map entries or just as an array of names yaml imagePullSecrets name image pull secret or imagePullSecrets image pull secret tlsDisable boolean true When set to true changes URLs from https to http such as the VAULT ADDR http 127 0 0 1 8200 environment variable set on the Vault pods externalVaultAddr string External vault server address for the injector and CSI provider to use Setting this will disable deployment of a vault server A service account with token review permissions is automatically created if server serviceAccount create true is set for the external Vault server to use openshift boolean false If true enables configuration specific to OpenShift such as NetworkPolicy SecurityContext and Route psp Values that configure Pod Security Policy enable boolean false When set to true enables Pod Security Policies for Vault and Vault Agent Injector annotations dictionary This value defines additional annotations to add to the Pod Security Policies This can either be YAML or a YAML formatted multi line templated string yaml annotations seccomp security alpha kubernetes io allowedProfileNames docker default runtime default apparmor security beta kubernetes io allowedProfileNames runtime default seccomp security alpha kubernetes io defaultProfileName runtime default apparmor security beta kubernetes io defaultProfileName runtime default or annotations seccomp security alpha kubernetes io allowedProfileNames docker default runtime default apparmor security beta kubernetes io allowedProfileNames runtime default seccomp security alpha kubernetes io defaultProfileName runtime default apparmor security beta kubernetes io defaultProfileName runtime default serverTelemetry Values that configure metrics and telemetry prometheusOperator boolean false When set to true enables integration with the Prometheus Operator Be sure to configure the top level serverTelemetry vault docs platform k8s helm configuration servertelemetry 1 section for more details and required configuration values injector Values that configure running a Vault Agent Injector Admission Webhook Controller within Kubernetes enabled boolean or string When set to true the Vault Agent Injector Admission Webhook controller will be created When set to defaults to the value of global enabled externalVaultAddr string Deprecated Please use global 
externalVaultAddr vault docs platform k8s helm configuration externalvaultaddr instead replicas int 1 The number of pods to deploy to create a highly available cluster of Vault Agent Injectors Requires Vault K8s 0 7 0 to have more than 1 replica leaderElector Values that configure the Vault Agent Injector leader election for HA deployments enabled boolean true When set to true enables leader election for Vault Agent Injector This is required when using auto tls and more than 1 replica image Values that configure the Vault Agent Injector Docker image repository string hashicorp vault k8s The name of the Docker image for Vault Agent Injector tag string 1 5 0 The tag of the Docker image for the Vault Agent Injector This should be pinned to a specific version when running in production Otherwise other changes to the chart may inadvertently upgrade your admission controller pullPolicy string IfNotPresent The pull policy for container images The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists agentImage Values that configure the Vault Agent sidecar image repository string hashicorp vault The name of the Docker image for the Vault Agent sidecar This should be set to the official Vault Docker image tag string 1 18 1 The tag of the Vault Docker image to use for the Vault Agent Sidecar Vault 1 3 1 is required by the admission controller agentDefaults Values that configure the injected Vault Agent containers default values cpuLimit string 500m The default CPU limit for injected Vault Agent containers cpuRequest string 250m The default CPU request for injected Vault Agent containers memLimit string 128Mi The default memory limit for injected Vault Agent containers memRequest string 64Mi The default memory request for injected Vault Agent containers ephemeralLimit string The default ephemeral storage limit for injected Vault Agent containers ephemeralRequest string The default ephemeral storage request for injected Vault Agent containers template string map The default template type for rendered secrets if no custom templates are defined Possible values include map and json templateConfig Default values within Agent s template config stanza vault docs agent and proxy agent template exitOnRetryFailure boolean true Controls whether Vault Agent exits after it has exhausted its number of template retry attempts due to failures staticSecretRenderInterval string Configures how often Vault Agent Template should render non leased secrets such as KV v2 See the Vault Agent Templates documentation vault docs agent and proxy agent template non renewable secrets for more details metrics Values that configure the Vault Agent Injector metric exporter enabled boolean false When set to true the Vault Agent Injector exports Prometheus metrics at the metrics path authPath string auth kubernetes Mount path of the Vault Kubernetes Auth Method logLevel string info Configures the log verbosity of the injector Supported log levels trace debug error warn info logFormat string standard Configures the log format of the injector Supported log formats standard json revokeOnShutdown boolean false Configures all Vault Agent sidecars to revoke their token when shutting down securityContext Security context for the pod template and the injector container pod dictionary Defines the securityContext for the injector Pod as YAML or a YAML formatted multi line templated string Default if not specified yaml runAsNonRoot true runAsGroup runAsUser fsGroup container dictionary Defines 
the securityContext for the injector container as YAML or a YAML formatted multi line templated string Default if not specified yaml allowPrivilegeEscalation false capabilities drop ALL resources dictionary The resource requests and limits CPU memory etc for each container of the injector This should be a YAML dictionary of a Kubernetes resource https kubernetes io docs concepts configuration manage resources containers object If this isn t specified then the pods won t request any specific amount of resources which limits the ability for Kubernetes to make efficient use of compute resources br Setting this is highly recommended yaml resources requests memory 256Mi cpu 250m limits memory 256Mi cpu 250m webhook Values that control the Mutating Webhook Configuration failurePolicy string Ignore Configures failurePolicy of the webhook To block pod creation while the webhook is unavailable set the policy to Fail See Failure Policy https kubernetes io docs reference access authn authz extensible admission controllers failure policy matchPolicy string Exact Specifies the approach to accepting changes based on the rules of the MutatingWebhookConfiguration See Match Policy https kubernetes io docs reference access authn authz extensible admission controllers matching requests matchpolicy timeoutSeconds int 30 Specifies the number of seconds before the webhook request will be ignored or fails If it is ignored or fails depends on the failurePolicy See timeouts https kubernetes io docs reference access authn authz extensible admission controllers timeouts namespaceSelector object The selector used by the admission webhook controller to limit what namespaces where injection can happen If unset all non system namespaces are eligible for injection See Matching requests namespace selector https kubernetes io docs reference access authn authz extensible admission controllers matching requests namespaceselector yaml namespaceSelector matchLabels sidecar injector enabled objectSelector object The selector used by the admission webhook controller to limit what objects can be affected by mutation See Matching requests object selector https kubernetes io docs reference access authn authz extensible admission controllers matching requests objectselector yaml objectSelector matchLabels sidecar injector enabled annotations string or object Defines additional annotations to attach to the webhook This can either be YAML or a YAML formatted multi line templated string namespaceSelector dictionary Deprecated please use webhook namespaceSelector vault docs platform k8s helm configuration namespaceselector instead objectSelector dictionary Deprecated please use webhook objectSelector vault docs platform k8s helm configuration objectselector instead extraLabels dictionary This value defines additional labels for Vault Agent Injector pods yaml extraLabels sample label1 foo sample label2 bar certs The certs section configures how the webhook TLS certs are configured These are the TLS certs for the Kube apiserver communicating to the webhook By default the injector will generate and manage its own certs but this requires the ability for the injector to update its own MutatingWebhookConfiguration In a production environment custom certs should probably be used Configure the values below to enable this secretName string secretName is the name of the Kubernetes secret that has the TLS certificate and private key to serve the injector webhook If this is null then the injector will default to its automatic management mode 
caBundle string The PEM encoded CA public certificate bundle for the TLS certificate served by the injector This must be specified as a string and can t come from a secret because it must be statically configured on the Kubernetes MutatingAdmissionWebhook resource This only needs to be specified if secretName is not null certName string tls crt The name of the certificate file within the secretName secret keyName string tls key The name of the key file within the secretName secret extraEnvironmentVars dictionary Extra environment variables to set in the injector deployment yaml Example setting injector TLS options in a deployment extraEnvironmentVars AGENT INJECT TLS MIN VERSION tls13 AGENT INJECT TLS CIPHER SUITES affinity This value defines the affinity https kubernetes io docs concepts configuration assign pod node affinity and anti affinity for Vault Agent Injector pods This can either be multi line string or YAML matching the PodSpec s affinity field It defaults to allowing only a single pod on each node which minimizes risk of the cluster becoming unusable if a node is lost If you need to run more pods per node for example testing on Minikube set this value to null yaml Recommended default server affinity affinity podAntiAffinity requiredDuringSchedulingIgnoredDuringExecution labelSelector matchLabels app kubernetes io name agent injector app kubernetes io instance component webhook topologyKey kubernetes io hostname topologySpreadConstraints array Topology settings https kubernetes io docs concepts workloads pods pod topology spread constraints for injector pods This can either be YAML or a YAML formatted multi line templated string tolerations array Toleration Settings for injector pods This should be either a multi line string or YAML matching the Toleration array nodeSelector dictionary nodeSelector labels for injector pod assignment formatted as a muli line string or YAML map priorityClassName string Priority class for injector pods annotations dictionary This value defines additional annotations for injector pods This can either be YAML or a YAML formatted multi line templated string yaml annotations sample annotation1 foo sample annotation2 bar or annotations sample annotation1 foo sample annotation2 bar failurePolicy string Ignore Deprecated please use webhook failurePolicy vault docs platform k8s helm configuration failurepolicy instead webhookAnnotations dictionary Deprecated please use webhook annotations vault docs platform k8s helm configuration annotations 1 instead service The service section configures the Kubernetes service for the Vault Agent Injector annotations dictionary This value defines additional annotations to add to the Vault Agent Injector service This can either be YAML or a YAML formatted multi line templated string yaml annotations sample annotation1 foo sample annotation2 bar or annotations sample annotation1 foo sample annotation2 bar serviceAccount Injector serviceAccount specific config annotations dictionary Extra annotations to attach to the injector serviceAccount This can either be YAML or a YAML formatted multi line templated string hostNetwork boolean false When set to true configures the Vault Agent Injector to run on the host network This is useful when alternative cluster networking is used port int 8080 Configures the port the Vault Agent Injector listens on podDisruptionBudget dictionary A disruption budget limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions yaml 
podDisruptionBudget maxUnavailable 1 strategy dictionary Strategy for updating the deployment This can be a multi line string or a YAML map yaml strategy rollingUpdate maxSurge 25 maxUnavailable 25 type RollingUpdate or strategy rollingUpdate maxSurge 25 maxUnavailable 25 type RollingUpdate livenessProbe Values that configure the liveness probe for the injector failureThreshold int 2 When set to a value configures how many probe failures will be tolerated by Kubernetes initialDelaySeconds int 60 Sets the initial delay of the liveness probe when the container starts periodSeconds int 5 When set to a value configures how often in seconds to perform the probe successThreshold int 1 When set to a value configures the minimum consecutive successes for the probe to be considered successful after having failed timeoutSeconds int 3 When set to a value configures the number of seconds after which the probe times out readinessProbe Values that configure the readiness probe for the injector failureThreshold int 2 When set to a value configures how many probe failures will be tolerated by Kubernetes initialDelaySeconds int 60 Sets the initial delay of the readiness probe when the container starts periodSeconds int 5 When set to a value configures how often in seconds to perform the probe successThreshold int 1 When set to a value configures the minimum consecutive successes for the probe to be considered successful after having failed timeoutSeconds int 3 When set to a value configures the number of seconds after which the probe times out startupProbe Values that configure the startup probe for the injector failureThreshold int 2 When set to a value configures how many probe failures will be tolerated by Kubernetes initialDelaySeconds int 60 Sets the initial delay of the startup probe when the container starts periodSeconds int 5 When set to a value configures how often in seconds to perform the probe successThreshold int 1 When set to a value configures the minimum consecutive successes for the probe to be considered successful after having failed timeoutSeconds int 3 When set to a value configures the number of seconds after which the probe times out server Values that configure running a Vault server within Kubernetes enabled boolean or string When set to true the Vault server will be created When set to defaults to the value of global enabled enterpriseLicense This value refers to a Kubernetes secret that you have created that contains your enterprise license If you are not using an enterprise image or if you plan to introduce the license key via another route then leave secretName blank or set it to null Requires Vault Enterprise 1 8 or later secretName string The name of the Kubernetes secret that holds the enterprise license The secret must be in the same namespace that Vault is installed into secretKey string license The key within the Kubernetes secret that holds the enterprise license image Values that configure the Vault Docker image repository string hashicorp vault The name of the Docker image for the containers running Vault tag string 1 18 1 The tag of the Docker image for the containers running Vault This should be pinned to a specific version when running in production Otherwise other changes to the chart may inadvertently upgrade your admission controller pullPolicy string IfNotPresent The pull policy for container images The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists updateStrategyType string OnDelete Configure the 
Update Strategy Type https kubernetes io docs concepts workloads controllers statefulset update strategies for the StatefulSet logLevel string Configures the Vault server logging verbosity If set this will override values defined in the Vault configuration file Supported log levels include trace debug info warn error logFormat string Configures the Vault server logging format If set this will override values defined in the Vault configuration file Supported log formats include standard json resources dictionary The resource requests and limits CPU memory etc for each container of the server This should be a YAML dictionary of a Kubernetes resource https kubernetes io docs concepts configuration manage resources containers object If this isn t specified then the pods won t request any specific amount of resources which limits the ability for Kubernetes to make efficient use of compute resources Setting this is highly recommended yaml resources requests memory 10Gi limits memory 10Gi ingress Values that configure Ingress services for Vault If deploying on OpenShift these ingress settings are ignored Use the route route configuration to expose Vault on OpenShift br br If ha ha is enabled the Ingress will point to the active vault server via the active Service This requires vault 1 4 and service registration vault docs configuration service registration kubernetes to be set in the vault config enabled boolean false When set to true an Ingress https kubernetes io docs concepts services networking ingress service will be created labels dictionary Labels for the ingress service annotations dictionary This value defines additional annotations to add to the Ingress service This can either be YAML or a YAML formatted multi line templated string yaml annotations kubernetes io ingress class nginx kubernetes io tls acme true or annotations kubernetes io ingress class nginx kubernetes io tls acme true ingressClassName string Specify the IngressClass https kubernetes io docs concepts services networking ingress ingress class that should be used to implement the Ingress activeService boolean true When HA mode is enabled and K8s service registration is being used configure the ingress to point to the Vault active service extraPaths array Configures extra paths to prepend to the host configuration This is useful when working with annotation based services yaml extraPaths path backend service name ssl redirect port number use annotation tls array Configures the TLS portion of the Ingress spec https kubernetes io docs concepts services networking ingress tls where hosts is a list of the hosts defined in the Common Name of the TLS certificate and secretName is the name of the Secret containing the required TLS files such as certificates and keys yaml tls hosts sslexample foo com sslexample bar com secretName testsecret tls hosts Values that configure the Ingress host rules host string chart example local Name of the host to use for Ingress paths array Deprecated server ingress extraPaths should be used instead A list of paths that will be directed to the Vault service At least one path is required yaml paths vault hostAliases array A list of aliases to be added to etc hosts Specified as a YAML list following the hostAlias format https kubernetes io docs tasks network customize hosts file for pods route Values that configure Route services for Vault in OpenShift If ha ha is enabled the Route will point to the active vault server via the active Service requires vault 1 4 and service registration vault docs 
---
layout: docs
page_title: Vault Enterprise License Management - Kubernetes
description: >-
Vault Helm supports deploying Vault Enterprise, including license autoloading.
---
# Vault enterprise license management
You can use this Helm chart to deploy Vault Enterprise by following a few extra steps around licensing.
~> **Note:** As of Vault Enterprise 1.8, the license must be specified via HCL configuration or environment variables on startup, unless the Vault cluster was created with an older Vault version and the license was stored. More information is available in the [Vault Enterprise License docs](/vault/docs/enterprise/license).
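For context, license autoloading amounts to pointing the server at a license file or string at startup. A minimal sketch, assuming the license file is mounted at the path shown (`VAULT_LICENSE` with the raw license string also works):

```shell
# Assumed mount path for the license file; adjust to your environment.
export VAULT_LICENSE_PATH=/vault/userconfig/vault-license/license.hclic
vault server -config=/vault/config/config.hcl
```

The Helm chart's `server.enterpriseLicense` value described below takes care of this wiring by mounting the license secret into the Vault pods.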
@include 'helm/version.mdx'
## Vault enterprise 1.8+
### License install
First create a Kubernetes secret using the contents of your license file. For example, the following commands create a secret with the name `vault-ent-license` and key `license`:
```bash
secret=$(cat 1931d1f4-bdfd-6881-f3f5-19349374841f.hclic)
kubectl create secret generic vault-ent-license --from-literal="license=${secret}"
```
-> **Note:** If you cannot find your `.hclic` file, please contact your sales team or Technical Account Manager.
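Before moving on, you can optionally confirm the secret holds the expected key and data (the secret and key names match the command above):

```shell
kubectl get secret vault-ent-license -o jsonpath='{.data.license}' | base64 -d | head -c 40
```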
In your chart overrides, set the values of [`server.image`](/vault/docs/platform/k8s/helm/configuration#image-2) to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/vault-enterprise/tags). Also set the name of the secret you just created in [`server.enterpriseLicense`](/vault/docs/platform/k8s/helm/configuration#enterpriselicense).
```yaml
# config.yaml
server:
image:
repository: hashicorp/vault-enterprise
tag: 1.18.1-ent
enterpriseLicense:
secretName: vault-ent-license
```
Now run `helm install`:
```shell-session
$ helm install hashicorp hashicorp/vault -f config.yaml
```
Once the cluster is [initialized and unsealed](/vault/docs/platform/k8s/helm/run), you may check the license status using the `vault license get` command:
```shell
kubectl exec -ti vault-0 -- vault license get
```
### License update
To update the autoloaded license in Vault, you may do the following:
- Update your license secret with the new license data
```shell
new_secret=$(base64 < ./new-license.hclic | tr -d '\n')
cat > patch-license.yaml <<EOF
data:
license: ${new_secret}
EOF
kubectl patch secret vault-ent-license --patch "$(cat patch-license.yaml)"
```
- Wait until [`vault license inspect`](/vault/docs/commands/license/inspect) shows the updated license
Since the `inspect` command is reading the license file from the mounted secret, this tells you when the updated secret has been propagated to the mount on the Vault pod.
```shell
kubectl exec vault-0 -- vault license inspect
```
- Reload Vault's license config
You may use the [`sys/config/reload/license` API endpoint](/vault/api-docs/system/config-reload#reload-license-file):
```shell
kubectl exec vault-0 -- vault write -f sys/config/reload/license
```
Or you may send a `SIGHUP` directly to Vault:
```shell
kubectl exec vault-0 -- pkill -HUP vault
```
- Verify that [`vault license get`](/vault/docs/commands/license/get) shows the updated license
```shell
kubectl exec vault-0 -- vault license get
```
## Vault enterprise prior to 1.8
In your chart overrides, set the values of `server.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/vault-enterprise/tags). Install the chart, and initialize and unseal Vault as described in [Running Vault](/vault/docs/platform/k8s/helm/run).
After Vault has been initialized and unsealed, set up a port-forward tunnel to the Vault Enterprise cluster:
```shell
kubectl port-forward vault-0 8200:8200
```
Next, in a separate terminal, create a `payload.json` file that contains the license key, as in this example:
```json
{
"text": "01ABCDEFG..."
}
```
Finally, using curl, apply the license key to the Vault API:
```bash
curl \
--header "X-Vault-Token: VAULT_LOGIN_TOKEN_HERE" \
--request POST \
--data @payload.json \
http://127.0.0.1:8200/v1/sys/license
```
To verify that the license was installed correctly, run the following `curl` command:
```shell
curl \
--header "X-Vault-Token: VAULT_LOGIN_TOKEN_HERE" \
http://127.0.0.1:8200/v1/sys/license
```
---
layout: 'docs'
page_title: 'Configure Vault Helm using Terraform'
sidebar_current: 'docs-platform-k8s-terraform'
description: |-
Describes how to configure the Vault Helm chart using Terraform
---
# Configuring Vault helm with terraform
Terraform may also be used to configure and deploy the Vault Helm chart by using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest/docs).
For example, to configure the chart to deploy [HA Vault with integrated storage (raft)](/vault/docs/platform/k8s/helm/examples/ha-with-raft), the value overrides can be set on the command line, in a values YAML file, or with a Terraform configuration:
<CodeTabs>
<CodeBlockConfig>
```shell-session
$ helm install vault hashicorp/vault \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
```
</CodeBlockConfig>
<CodeBlockConfig>
```yaml
server:
ha:
enabled: true
raft:
enabled: true
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "vault" {
name = "vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
set {
name = "server.ha.enabled"
value = "true"
}
set {
name = "server.ha.raft.enabled"
value = "true"
}
}
```
</CodeBlockConfig>
</CodeTabs>
The values file can also be used directly in the Terraform configuration with the [`values` directive](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#values).
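As a minimal sketch (the overrides file name is an assumption), this looks like:

```hcl
resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "vault"

  # The file contents are merged with the chart's default values,
  # equivalent to `helm install --values override-values.yml`.
  values = [file("override-values.yml")]
}
```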
## Further examples
### Vault config as a multi-line string
<CodeTabs>
<CodeBlockConfig>
```yaml
server:
ha:
enabled: true
raft:
enabled: true
setNodeId: true
config: |
ui = false
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
}
storage "raft" {
path = "/vault/data"
}
service_registration "kubernetes" {}
seal "awskms" {
region = "us-west-2"
kms_key_id = "alias/my-kms-key"
}
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
resource "helm_release" "vault" {
name = "vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
set {
name = "server.ha.enabled"
value = "true"
}
set {
name = "server.ha.raft.enabled"
value = "true"
}
set {
name = "server.ha.raft.setNodeId"
value = "true"
}
set {
name = "server.ha.raft.config"
value = <<EOT
ui = false
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
}
storage "raft" {
path = "/vault/data"
}
service_registration "kubernetes" {}
seal "awskms" {
region = "us-west-2"
kms_key_id = "alias/my-kms-key"
}
EOT
}
}
```
</CodeBlockConfig>
</CodeTabs>
### Lists of volumes and volumeMounts
<CodeTabs>
<CodeBlockConfig>
```yaml
server:
volumes:
- name: userconfig-my-gcp-iam
secret:
defaultMode: 420
secretName: my-gcp-iam
volumeMounts:
- mountPath: /vault/userconfig/my-gcp-iam
name: userconfig-my-gcp-iam
readOnly: true
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
resource "helm_release" "vault" {
name = "vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
set {
name = "server.volumes[0].name"
value = "userconfig-my-gcp-iam"
}
set {
name = "server.volumes[0].secret.defaultMode"
value = "420"
}
set {
name = "server.volumes[0].secret.secretName"
value = "my-gcp-iam"
}
set {
name = "server.volumeMounts[0].mountPath"
value = "/vault/userconfig/my-gcp-iam"
}
set {
name = "server.volumeMounts[0].name"
value = "userconfig-my-gcp-iam"
}
set {
name = "server.volumeMounts[0].readOnly"
value = "true"
}
}
```
</CodeBlockConfig>
</CodeTabs>
### Annotations
Annotations can be set as a YAML map:
<CodeTabs>
<CodeBlockConfig>
```yaml
server:
ingress:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: true
service.beta.kubernetes.io/azure-load-balancer-internal-subnet: apps-subnet
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
set {
name = "server.ingress.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "server.ingress.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-internal-subnet"
value = "apps-subnet"
}
```
</CodeBlockConfig>
</CodeTabs>
or as a multi-line string:
<CodeTabs>
<CodeBlockConfig>
```yaml
server:
ingress:
annotations: |
service.beta.kubernetes.io/azure-load-balancer-internal: true
service.beta.kubernetes.io/azure-load-balancer-internal-subnet: apps-subnet
```
</CodeBlockConfig>
<CodeBlockConfig>
```hcl
set {
name = "server.ingress.annotations"
value = yamlencode({
"service.beta.kubernetes.io/azure-load-balancer-internal": "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet": "apps-subnet"
})
type = "auto"
}
```
</CodeBlockConfig>
</CodeTabs>
---
layout: docs
page_title: Running Vault - OpenShift
description: >-
Vault can run directly on OpenShift in various configurations. For
pure-OpenShift workloads, this enables Vault to also exist purely within
Kubernetes.
---
# Run Vault on OpenShift
@include 'helm/version.mdx'
The following documentation describes installing, running, and using
Vault and **Vault Agent Injector** on OpenShift.
~> **Note:** We recommend using the Vault agent injector on OpenShift
instead of the Secrets Store CSI driver. OpenShift
[does not recommend](https://docs.openshift.com/container-platform/4.9/storage/persistent_storage/persistent-storage-hostpath.html)
using `hostPath` mounting in production and does not
[certify Helm charts](https://github.com/redhat-certification/chart-verifier/blob/dbf89bff2d09142e4709d689a9f4037a739c2244/docs/helm-chart-checks.md#table-2-helm-chart-default-checks)
that use CSI objects, because the pods must run as privileged. If you would like to run the Secrets Store
CSI driver on a development or testing cluster, refer to
[installation instructions for the Vault CSI provider](/vault/docs/platform/k8s/csi/installation).
## Requirements
The following are required to install Vault and Vault Agent Injector
on OpenShift:
- Cluster Admin privileges to bind the `auth-delegator` role to Vault's service account (see the check after this list)
- Helm v3.6+
- OpenShift 4.3+
- Vault Helm v0.6.0+
- Vault K8s v0.4.0+
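As a quick way to confirm the first requirement, you can ask OpenShift whether your current user may create the binding (one possible check):

```shell-session
$ oc auth can-i create clusterrolebindings
```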
~> **Note:** Support for Consul on OpenShift has been available since [Consul 1.9](https://www.hashicorp.com/blog/introducing-openshift-support-for-consul-on-kubernetes). However, for highly available
deployments, Raft integrated storage is recommended.
## Additional resources
The documentation, configuration and examples for Vault Helm and Vault K8s Agent Injector
are applicable to OpenShift installations. For more examples see the existing documentation:
- [Vault Helm documentation](/vault/docs/platform/k8s/helm)
- [Vault K8s documentation](/vault/docs/platform/k8s/injector)
## Helm chart
The [Vault Helm chart](https://github.com/hashicorp/vault-helm)
is the recommended way to install and configure Vault on OpenShift.
In addition to running Vault itself, the Helm chart is the primary
method for installing and configuring Vault Agent Injection Mutating
Webhook.
While the Helm chart automatically sets up complex resources and exposes the
configuration to meet your requirements, it **does not automatically operate
Vault.** You are still responsible for learning how to monitor, backup, upgrade,
etc. the Vault cluster.
~> **Security Warning:** By default, the chart runs in standalone mode. This
mode uses a single Vault server with a file storage backend. This is a less
secure and less resilient installation that is **NOT** appropriate for a
production setup. It is highly recommended to use a [properly secured Kubernetes
cluster](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/),
[learn the available configuration
options](/vault/docs/platform/k8s/helm/configuration), and read the [production deployment
checklist](/vault/docs/platform/k8s/helm/run#architecture).
## How-To
### Install Vault
To use the Helm chart, add the HashiCorp Helm repository and check that you have
access to the chart:
@include 'helm/repo.mdx'
-> **Important:** The Helm chart is new and under significant development.
Please always run Helm with `--dry-run` before any install or upgrade to verify
changes.
Use `helm install` to install the latest release of the Vault Helm chart.
```shell-session
$ helm install vault hashicorp/vault
```
Or install a specific version of the chart.
@include 'helm/install.mdx'
The `helm install` command accepts parameters to override default configuration
values inline or defined in a file. For all OpenShift deployments, `global.openshift`
should be set to `true`.
Override the `server.dev.enabled` configuration value:
```shell-session
$ helm install vault hashicorp/vault \
--set "global.openshift=true" \
--set "server.dev.enabled=true"
```
Override all the configuration found in a file:
```shell-session
$ cat override-values.yml
global:
openshift: true
server:
ha:
enabled: true
replicas: 5
##
$ helm install vault hashicorp/vault \
--values override-values.yml
```
#### Dev mode
The Helm chart can run a Vault server in development mode. This installs a single
Vault server with an in-memory storage backend.
-> **Dev mode:** This is ideal for learning and demonstration environments but
NOT recommended for a production environment.
Install the latest Vault Helm chart in development mode.
```shell-session
$ helm install vault hashicorp/vault \
--set "global.openshift=true" \
--set "server.dev.enabled=true"
```
#### Highly available raft mode
The following creates a Vault cluster using the Raft integrated storage backend.
Install the latest Vault Helm chart in HA Raft mode:
```shell-session
$ helm install vault hashicorp/vault \
--set='global.openshift=true' \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
```
Next, initialize and unseal the `vault-0` pod:
```shell-session
$ oc exec -ti vault-0 -- vault operator init
$ oc exec -ti vault-0 -- vault operator unseal
```
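If you want to keep the unseal keys and initial root token that `operator init` prints, one option is to write them out as JSON instead (`-format=json` is a standard CLI flag; store the resulting file securely):

```shell-session
$ oc exec -ti vault-0 -- vault operator init -format=json > cluster-keys.json
```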
Finally, join the remaining pods to the Raft cluster and unseal them. The pods
need to communicate directly, so we'll configure them to use the internal
service provided by the Helm chart:
```shell-session
$ oc exec -ti vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
$ oc exec -ti vault-1 -- vault operator unseal
$ oc exec -ti vault-2 -- vault operator raft join http://vault-0.vault-internal:8200
$ oc exec -ti vault-2 -- vault operator unseal
```
To verify that the Raft cluster has been successfully initialized, run the following commands.
First, log in using the `root` token on the `vault-0` pod:
```shell-session
$ oc exec -ti vault-0 -- vault login
```
Next, list all the raft peers:
```shell-session
$ oc exec -ti vault-0 -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
a1799962-8711-7f28-23f0-cea05c8a527d vault-0.vault-internal:8201 leader true
e6876c97-aaaa-a92e-b99a-0aafab105745 vault-1.vault-internal:8201 follower true
4b5d7383-ff31-44df-e008-6a606828823b vault-2.vault-internal:8201 follower true
```
Vault with integrated storage (Raft) is now ready to use!
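As a final check, `vault status` on any pod should report the node as initialized and unsealed:

```shell-session
$ oc exec -ti vault-0 -- vault status
```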
#### External mode
The Helm chart may be run in external mode. This installs no Vault server and
relies on an existing, network-addressable Vault server.
Install the latest Vault Helm chart in external mode.
```shell-session
$ helm install vault hashicorp/vault \
--set "global.openshift=true" \
--set "injector.externalVaultAddr=http://external-vault:8200"
```
## Tutorial
Refer to the [Integrate a Kubernetes Cluster with an
External Vault](/vault/tutorials/kubernetes/kubernetes-external-vault)
tutorial to learn how to use an external Vault within a Kubernetes cluster.
---
layout: docs
page_title: Running Vault - Kubernetes
description: >-
Vault can run directly on Kubernetes in various configurations. For
pure-Kubernetes workloads, this enables Vault to also exist purely within
Kubernetes.
---
# Run Vault on kubernetes
Vault works with Kubernetes in various modes: `dev`, `standalone`, `ha`,
and `external`.
@include 'helm/version.mdx'
## Helm chart
The [Vault Helm chart](https://github.com/hashicorp/vault-helm)
is the recommended way to install and configure Vault on Kubernetes.
In addition to running Vault itself, the Helm chart is the primary
method for installing and configuring Vault to integrate with other
services such as Consul for High Availability (HA) deployments.
While the Helm chart automatically sets up complex resources and exposes the
configuration to meet your requirements, it **does not automatically operate
Vault.** You are still responsible for learning how to monitor, backup, upgrade,
etc. the Vault cluster.
~> **Security Warning:** By default, the chart runs in standalone mode. This
mode uses a single Vault server with a file storage backend. This is a less
secure and less resilient installation that is **NOT** appropriate for a
production setup. It is highly recommended to use a [properly secured Kubernetes
cluster](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/),
[learn the available configuration
options](/vault/docs/platform/k8s/helm/configuration), and read the [production deployment
checklist](/vault/docs/platform/k8s/helm/run#architecture).
## How-To
### Install Vault
Helm must be installed and configured on your machine. Please refer to the [Helm
documentation](https://helm.sh/) or the [Vault Installation to Minikube via
Helm](/vault/tutorials/kubernetes/kubernetes-minikube-consul) tutorial.
To use the Helm chart, add the Hashicorp helm repository and check that you have
access to the chart:
@include 'helm/repo.mdx'
-> **Important:** The Helm chart is new and under significant development.
Please always run Helm with `--dry-run` before any install or upgrade to verify
changes.
Use `helm install` to install the latest release of the Vault Helm chart.
```shell-session
$ helm install vault hashicorp/vault
```
Or install a specific version of the chart.
@include 'helm/install.mdx'
The `helm install` command accepts parameters to override default configuration
values inline or defined in a file.
Override the `server.dev.enabled` configuration value:
```shell-session
$ helm install vault hashicorp/vault \
--set "server.dev.enabled=true"
```
Override all the configuration found in a file:
```shell-session
$ cat override-values.yml
server:
ha:
enabled: true
replicas: 5

$ helm install vault hashicorp/vault \
--values override-values.yml
```
#### Dev mode
The Helm chart may run a Vault server in development. This installs a single
Vault server with a memory storage backend.
-> **Dev mode:** This is ideal for learning and demonstration environments but
NOT recommended for a production environment.
Install the latest Vault Helm chart in development mode.
```shell-session
$ helm install vault hashicorp/vault \
--set "server.dev.enabled=true"
```
#### Standalone mode
The Helm chart defaults to run in `standalone` mode. This installs a single
Vault server with a file storage backend.
Install the latest Vault Helm chart in standalone mode.
```shell-session
$ helm install vault hashicorp/vault
```
#### HA mode
The Helm chart may be run in high availability (HA) mode. This installs three
Vault servers with an existing Consul storage backend. It is suggested that
Consul is installed via the [Consul Helm
chart](https://github.com/hashicorp/consul-k8s).
Install the latest Vault Helm chart in HA mode.
```shell-session
$ helm install vault hashicorp/vault \
--set "server.ha.enabled=true"
```
Refer to the [Vault Installation to Minikube via
Helm](/vault/tutorials/kubernetes/kubernetes-minikube-consul) tutorial
to learn how to set up Consul and Vault in HA mode.
#### External mode
The Helm chart may be run in external mode. This installs no Vault server and
relies on a network addressable Vault server to exist.
Install the latest Vault Helm chart in external mode.
```shell-session
$ helm install vault hashicorp/vault \
--set "injector.externalVaultAddr=http://external-vault:8200"
```
Refer to the [Integrate a Kubernetes Cluster with an
External Vault](/vault/tutorials/kubernetes/kubernetes-external-vault)
tutorial to learn how to use an external Vault within a Kubernetes cluster.
### View the Vault UI
The Vault UI is enabled but NOT exposed as a service for security reasons. The
Vault UI can be exposed via port-forwarding or through a [`ui`
configuration value](/vault/docs/platform/k8s/helm/configuration/#ui).
Expose the Vault UI with port-forwarding:
```shell-session
$ kubectl port-forward vault-0 8200:8200
Forwarding from 127.0.0.1:8200 -> 8200
Forwarding from [::1]:8200 -> 8200
# ...
```
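Alternatively, a minimal sketch of exposing the UI through chart values (the
`LoadBalancer` service type is illustrative; see the configuration reference
above for all `ui` options):

```yaml
ui:
  enabled: true
  serviceType: LoadBalancer
  externalPort: 8200
```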
### Initialize and unseal Vault
After the Vault Helm chart is installed in `standalone` or `ha` mode, one of the
Vault servers needs to be
[initialized](/vault/docs/commands/operator/init). The
initialization generates the credentials necessary to
[unseal](/vault/docs/concepts/seal#why) all the Vault
servers.
#### CLI initialize and unseal
View all the Vault pods in the current namespace:
```shell-session
$ kubectl get pods -l app.kubernetes.io/name=vault
NAME READY STATUS RESTARTS AGE
vault-0 0/1 Running 0 1m49s
vault-1 0/1 Running 0 1m49s
vault-2 0/1 Running 0 1m49s
```
Initialize one Vault server with the default number of key shares and default
key threshold:
```shell-session
$ kubectl exec -ti vault-0 -- vault operator init
Unseal Key 1: MBFSDepD9E6whREc6Dj+k3pMaKJ6cCnCUWcySJQymObb
Unseal Key 2: zQj4v22k9ixegS+94HJwmIaWLBL3nZHe1i+b/wHz25fr
Unseal Key 3: 7dbPPeeGGW3SmeBFFo04peCKkXFuuyKc8b2DuntA4VU5
Unseal Key 4: tLt+ME7Z7hYUATfWnuQdfCEgnKA2L173dptAwfmenCdf
Unseal Key 5: vYt9bxLr0+OzJ8m7c7cNMFj7nvdLljj0xWRbpLezFAI9
Initial Root Token: s.zJNwZlRrqISjyBHFMiEca6GF
# ...
```
The output displays the key shares and initial root token generated.
Unseal the Vault server with the key shares until the key threshold is met:
```sh
# Unseal the first vault server until it reaches the key threshold
$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 1
$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 2
$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 3
```
Repeat the unseal process for all Vault server pods. When all Vault server pods
are unsealed they report READY `1/1`.
```shell-session
$ kubectl get pods -l app.kubernetes.io/name=vault
NAME READY STATUS RESTARTS AGE
vault-0 1/1 Running 0 1m49s
vault-1 1/1 Running 0 1m49s
vault-2 1/1 Running 0 1m49s
```
#### Google KMS auto unseal
The Helm chart may be run with [Google KMS for Auto
Unseal](/vault/docs/configuration/seal/gcpckms). This enables Vault server pods to
auto unseal if they are rescheduled.
Vault Helm requires the Google Cloud KMS credentials stored in
`credentials.json` and mounted as a secret in each Vault server pod.
##### Create the secret
First, create the secret in Kubernetes:
```bash
kubectl create secret generic kms-creds --from-file=credentials.json
```
Vault Helm mounts this to `/vault/userconfig/kms-creds/credentials.json`.
##### Config example
This is a Vault Helm configuration that uses Google KMS:
```yaml
global:
enabled: true
server:
extraEnvironmentVars:
GOOGLE_REGION: global
GOOGLE_PROJECT: <PROJECT NAME>
GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
volumes:
- name: userconfig-kms-creds
secret:
defaultMode: 420
secretName: kms-creds
volumeMounts:
- mountPath: /vault/userconfig/kms-creds
name: userconfig-kms-creds
readOnly: true
ha:
enabled: true
replicas: 3
config: |
ui = true
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
}
seal "gcpckms" {
project = "<NAME OF PROJECT>"
region = "global"
key_ring = "<NAME OF KEYRING>"
crypto_key = "<NAME OF KEY>"
}
storage "consul" {
path = "vault"
address = "HOST_IP:8500"
}
```
#### Amazon KMS auto unseal
The Helm chart may be run with [AWS KMS for Auto
Unseal](/vault/docs/configuration/seal/awskms). This enables Vault server pods to auto
unseal if they are rescheduled.
Vault Helm requires the AWS credentials stored as environment variables that
are defined in each Vault server pod.
##### Create the secret
First, create a secret with your KMS access key/secret:
```shell-session
$ kubectl create secret generic kms-creds \
--from-literal=AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID?}" \
--from-literal=AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY?}"
```
##### Config example
This is a Vault Helm configuration that uses AWS KMS:
```yaml
global:
enabled: true
server:
extraSecretEnvironmentVars:
- envName: AWS_ACCESS_KEY_ID
secretName: kms-creds
secretKey: AWS_ACCESS_KEY_ID
- envName: AWS_SECRET_ACCESS_KEY
secretName: kms-creds
secretKey: AWS_SECRET_ACCESS_KEY
ha:
enabled: true
config: |
ui = true
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
}
seal "awskms" {
region = "KMS_REGION_HERE"
kms_key_id = "KMS_KEY_ID_HERE"
}
storage "consul" {
address = "HOST_IP:8500"
path = "vault/"
}
```
### Probes
Probes are essential for detecting failures, rescheduling, and using pods in
Kubernetes. The Helm chart offers configurable readiness and liveness probes
which can be customized for a variety of use cases.
Vault's [`/sys/health`](/vault/api-docs/system/health) endpoint can be customized to
change the behavior of the health check. For example, we can change the Vault
readiness probe to report the Vault pods as ready even if they're still uninitialized
and sealed, using the following probe:
```yaml
server:
readinessProbe:
enabled: true
path: '/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204'
```
Using this customized probe, a `postStart` script could automatically run once the
pod is ready for additional setup.
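For example, a minimal sketch of pairing the customized probe with a
`postStart` command (the script path is hypothetical and must exist in the
image or a mounted volume):

```yaml
server:
  readinessProbe:
    enabled: true
    path: '/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204'
  # postStart runs in the server container once it starts
  postStart:
    - /bin/sh
    - -c
    - /vault/userconfig/setup/bootstrap.sh # hypothetical setup script
```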
### Upgrading Vault on kubernetes
To upgrade Vault on Kubernetes, we follow the same pattern as
[generally upgrading Vault](/vault/docs/upgrading), except we can use
the Helm chart to update the Vault server StatefulSet. It is important to understand
how to [generally upgrade Vault](/vault/docs/upgrading) before reading this
section.
The Vault StatefulSet uses the `OnDelete` update strategy. It is critical to use `OnDelete` instead
of `RollingUpdate` because standbys must be updated before the active primary. A
failover to an older version of Vault must always be avoided.
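You can confirm the strategy on an installed release (assuming the default
release name `vault`):

```shell-session
$ kubectl get statefulset vault -o jsonpath='{.spec.updateStrategy.type}'
OnDelete
```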
!> **IMPORTANT NOTE:** Always back up your data before upgrading! Vault does not
make backward-compatibility guarantees for its data store. Simply replacing the
newly-installed Vault binary with the previous version may not cleanly
downgrade Vault, as upgrades may perform changes to the underlying data
structure that make the data incompatible with a downgrade. If you need to roll
back to a previous version of Vault, you should roll back your data store as
well.
#### Upgrading Vault servers
!> **IMPORTANT NOTE:** Helm will install the latest chart found in a repo by default.
It's recommended to specify the chart version when upgrading.
To initiate the upgrade, set the `server.image` values to the desired Vault
version, either in a values yaml file or on the command line. For illustrative
purposes, the example below uses `vault:123.456`.
```yaml
server:
image:
repository: 'vault'
tag: '123.456'
```
Next, list the available chart versions and choose the desired version to install.
```bash
$ helm search repo hashicorp/vault
NAME CHART VERSION APP VERSION DESCRIPTION
hashicorp/vault 0.29.1 1.18.1 Official HashiCorp Vault Chart
```
Next, test the upgrade with `--dry-run` first to verify the changes sent to the
Kubernetes cluster.
```shell-session
$ helm upgrade vault hashicorp/vault --version=0.29.1 \
--set='server.image.repository=vault' \
--set='server.image.tag=123.456' \
--dry-run
```
This should cause no changes (although the resources are updated). If
everything is stable, `helm upgrade` can be run.
The `helm upgrade` command should have updated the StatefulSet template for
the Vault servers, however, no pods have been deleted. The pods must be manually
deleted to upgrade. Deleting the pods does not delete any persisted data.
If Vault is not deployed using `ha` mode, the single Vault server may be deleted by
running:
```shell-session
$ kubectl delete pod <name of Vault pod>
```
If you deployed Vault in high availability (`ha`) mode, you must upgrade your
standby pods before upgrading the active pod:
1. Before deleting the standby pod, remove the associated node from the Raft
   cluster with `vault operator raft remove-peer <server_id>`.
1. Confirm Vault removed the node successfully from Raft with
`vault operator raft list-peers`.
1. Once you confirm the removal, delete the pod.
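As a sketch, the sequence for a single standby such as `vault-1` looks like the
following, where `<server_id>` comes from the `list-peers` output:

```shell-session
# Run from a pod where you have logged in with a sufficiently privileged token
$ kubectl exec -ti vault-0 -- vault operator raft remove-peer <server_id>
$ kubectl exec -ti vault-0 -- vault operator raft list-peers
$ kubectl delete pod vault-1
```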
<Warning title="Delete nodes to avoid unnecessary leader elections">
Removing a pod without first deleting the node from its cluster means that
Raft will not be aware of the correct number of nodes in the cluster. Not knowing
the correct number of nodes can trigger a leader election, which can potentially
cause unneeded downtime.
</Warning>
Vault has K8s service discovery built in (when enabled in the server configuration) and
will automatically change the labels of the pod with its current leader status. These labels
can be used to filter the pods.
For example, select all pods that are Vault standbys:
```shell-session
$ kubectl get pods -l vault-active=false
```
Select the active Vault pod:
```shell-session
$ kubectl get pods -l vault-active=true
```
Next, sequentially delete every pod that is not the active primary, ensuring the quorum is maintained at all times:
```shell-session
$ kubectl delete pod <name of Vault pod>
```
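If auto-unseal is configured, a sketch of that deletion loop, using the label
selector shown above (the `sleep` gives the StatefulSet time to recreate each
pod before waiting on it):

```shell-session
$ for pod in $(kubectl get pods -l vault-active=false -o jsonpath='{.items[*].metadata.name}'); do
    kubectl delete pod "$pod"
    sleep 5
    kubectl wait --for=condition=Ready pod/"$pod" --timeout=120s
  done
```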
If auto-unseal is not being used, the newly scheduled Vault standby pods need
to be unsealed:
```shell-session
$ kubectl exec -ti <name of pod> -- vault operator unseal
```
Finally, once the standby nodes have been updated and unsealed, delete the active
primary:
```shell-session
$ kubectl delete pod <name of Vault primary>
```
Similar to the standby nodes, the former primary also needs to be unsealed:
```shell-session
$ kubectl exec -ti <name of pod> -- vault operator unseal
```
After a few moments the Vault cluster should elect a new active primary. The Vault
cluster is now upgraded!
### Protecting sensitive Vault configurations
Vault Helm renders a Vault configuration file during installation and stores the
file in a Kubernetes configmap. Some configurations require sensitive data to be
included in the configuration file and would not be encrypted at rest once created
in Kubernetes.
The following example shows how to add extra configuration files to Vault Helm
to protect sensitive configurations from being in plaintext at rest using Kubernetes
secrets.
First, create a partial Vault configuration with the sensitive settings Vault
loads during startup:
```shell-session
$ cat <<EOF >>config.hcl
storage "mysql" {
username = "user1234"
password = "secret123!"
database = "vault"
}
EOF
```
Next, create a Kubernetes secret containing this partial configuration:
```shell-session
$ kubectl create secret generic vault-storage-config \
--from-file=config.hcl
```
Finally, mount this secret as an extra volume and add an additional `-config` flag
to the Vault startup command:
```shell-session
$ helm install vault hashicorp/vault \
--set='server.volumes[0].name=userconfig-vault-storage-config' \
--set='server.volumes[0].secret.defaultMode=420' \
--set='server.volumes[0].secret.secretName=vault-storage-config' \
--set='server.volumeMounts[0].mountPath=/vault/userconfig/vault-storage-config' \
--set='server.volumeMounts[0].name=userconfig-vault-storage-config' \
--set='server.volumeMounts[0].readOnly=true' \
--set='server.extraArgs=-config=/vault/userconfig/vault-storage-config/config.hcl'
```
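The same settings can be kept in a values file instead of inline flags; a
sketch equivalent to the command above:

```yaml
server:
  volumes:
    - name: userconfig-vault-storage-config
      secret:
        defaultMode: 420
        secretName: vault-storage-config
  volumeMounts:
    - mountPath: /vault/userconfig/vault-storage-config
      name: userconfig-vault-storage-config
      readOnly: true
  extraArgs: '-config=/vault/userconfig/vault-storage-config/config.hcl'
```

Install with `helm install vault hashicorp/vault --values <values file>`.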
## Architecture
We recommend running Vault on Kubernetes with the same
[general architecture](/vault/docs/internals/architecture)
as running it anywhere else. There are some benefits Kubernetes can provide
that ease operating a Vault cluster, and we document those below. The standard
[production deployment](/vault/tutorials/operations/production-hardening) tutorial is still an
important read even if running Vault within Kubernetes.
### Production deployment checklist
_End-to-End TLS._ Vault should always be used with TLS in production. If
intermediate load balancers or reverse proxies are used to front Vault,
they should not terminate TLS. This way traffic is always encrypted in transit
to Vault and minimizes risks introduced by intermediate layers. See the
[official documentation](/vault/docs/platform/k8s/helm/examples/standalone-tls/)
for an example of configuring Vault Helm to use TLS.
_Single Tenancy._ Vault should be the only main process running on a machine.
This reduces the risk that another process running on the same machine is
compromised and can interact with Vault. This can be accomplished by using Vault
Helm's `affinity` configurable. See the
[official documentation](/vault/docs/platform/k8s/helm/examples/ha-with-consul/)
for an example of configuring Vault Helm to use affinity rules.
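For example, a sketch of an anti-affinity rule similar to the chart's default,
which schedules at most one Vault server pod per host:

```yaml
server:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname
```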
_Enable Auditing._ Vault supports several auditing backends. Enabling auditing
provides a history of all operations performed by Vault and provides a forensics
trail in the case of misuse or compromise. Audit logs securely hash any sensitive
data, but access should still be restricted to prevent any unintended disclosures.
Vault Helm includes a configurable `auditStorage` option that provisions a persistent
volume to store audit logs. See the
[official documentation](/vault/docs/platform/k8s/helm/examples/standalone-audit/)
for an example on configuring Vault Helm to use auditing.
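A sketch of enabling the audit volume, then turning on the `file` audit device
at the path where the chart mounts it (assuming the default mount path
`/vault/audit`):

```yaml
server:
  auditStorage:
    enabled: true
    size: 10Gi
```

```shell-session
$ kubectl exec -ti vault-0 -- vault audit enable file file_path=/vault/audit/vault_audit.log
```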
_Immutable Upgrades._ Vault relies on an external storage backend for persistence,
and this decoupling allows the servers running Vault to be managed immutably.
When upgrading to new versions, new servers with the upgraded version of Vault
are brought online. They are attached to the same shared storage backend and
unsealed. Then the old servers are destroyed. This reduces the need for remote
access and upgrade orchestration which may introduce security gaps. See the
[upgrade section](#how-to) for instructions
on upgrading Vault on Kubernetes.
_Upgrade Frequently._ Vault is actively developed, and updating frequently is
important to incorporate security fixes and any changes in default settings such
as key lengths or cipher suites. Subscribe to the Vault mailing list and
GitHub CHANGELOG for updates.
_Restrict Storage Access._ Vault encrypts all data at rest, regardless of which
storage backend is used. Although the data is encrypted, an attacker with arbitrary
control can cause data corruption or loss by modifying or deleting keys. Access
to the storage backend should be restricted to only Vault to avoid unauthorized
access or operations.
---
layout: 'docs'
page_title: 'Highly Available Vault Enterprise Performance Clusters with Raft'
sidebar_current: 'docs-platform-k8s-examples-enterprise-perf-with-raft'
description: |-
Describes how to set up Performance clusters with Integrated Storage (Raft)
---
# Highly available Vault enterprise performance clusters with integrated storage (Raft)
@include 'helm/version.mdx'
The following is an example of creating a performance cluster using Vault Helm.
For more information on Performance Replication, [see the official documentation](/vault/docs/enterprise/replication/).
-> For license configuration refer to [Running Vault Enterprise](/vault/docs/platform/k8s/helm/enterprise).
## Primary cluster
First, create the primary cluster:
```shell
helm install vault-primary hashicorp/vault \
--set='server.image.repository=hashicorp/vault-enterprise' \
--set='server.image.tag=1.18.1-ent' \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
```
Next, initialize and unseal the `vault-primary-0` pod:
```shell
kubectl exec -ti vault-primary-0 -- vault operator init
kubectl exec -ti vault-primary-0 -- vault operator unseal
```
Finally, join the remaining pods to the Raft cluster and unseal them. The pods
will need to communicate directly so we'll configure the pods to use the internal
service provided by the Helm chart:
```shell
kubectl exec -ti vault-primary-1 -- vault operator raft join http://vault-primary-0.vault-primary-internal:8200
kubectl exec -ti vault-primary-1 -- vault operator unseal
kubectl exec -ti vault-primary-2 -- vault operator raft join http://vault-primary-0.vault-primary-internal:8200
kubectl exec -ti vault-primary-2 -- vault operator unseal
```
To verify if the Raft cluster has successfully been initialized, run the following.
First, login using the `root` token on the `vault-primary-0` pod:
```shell
kubectl exec -ti vault-primary-0 -- vault login
```
Next, list all the raft peers:
```shell
$ kubectl exec -ti vault-primary-0 -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
a1799962-8711-7f28-23f0-cea05c8a527d vault-primary-0.vault-primary-internal:8201 leader true
e6876c97-aaaa-a92e-b99a-0aafab105745 vault-primary-1.vault-primary-internal:8201 follower true
4b5d7383-ff31-44df-e008-6a606828823b vault-primary-2.vault-primary-internal:8201 follower true
```
## Secondary cluster
With the primary cluster created, next create a secondary cluster.
```shell
helm install vault-secondary hashicorp/vault \
--set='server.image.repository=hashicorp/vault-enterprise' \
--set='server.image.tag=1.18.1-ent' \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
```
Next, initialize and unseal the `vault-secondary-0` pod:
```shell
kubectl exec -ti vault-secondary-0 -- vault operator init
kubectl exec -ti vault-secondary-0 -- vault operator unseal
```
Finally, join the remaining pods to the Raft cluster and unseal them. The pods
will need to communicate directly so we'll configure the pods to use the internal
service provided by the Helm chart:
```shell
kubectl exec -ti vault-secondary-1 -- vault operator raft join http://vault-secondary-0.vault-secondary-internal:8200
kubectl exec -ti vault-secondary-1 -- vault operator unseal
kubectl exec -ti vault-secondary-2 -- vault operator raft join http://vault-secondary-0.vault-secondary-internal:8200
kubectl exec -ti vault-secondary-2 -- vault operator unseal
```
To verify if the Raft cluster has successfully been initialized, run the following.
First, login using the `root` token on the `vault-secondary-0` pod:
```shell
kubectl exec -ti vault-secondary-0 -- vault login
```
Next, list all the raft peers:
```shell
$ kubectl exec -ti vault-secondary-0 -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
a1799962-8711-7f28-23f0-cea05c8a527d vault-secondary-0.vault-secondary-internal:8201 leader true
e6876c97-aaaa-a92e-b99a-0aafab105745 vault-secondary-1.vault-secondary-internal:8201 follower true
4b5d7383-ff31-44df-e008-6a606828823b vault-secondary-2.vault-secondary-internal:8201 follower true
```
## Enable performance replication on primary
With the initial clusters set up, we can now configure them for Performance Replication.
First, on the primary cluster, enable replication:
```shell
kubectl exec -ti vault-primary-0 -- vault write -f sys/replication/performance/primary/enable primary_cluster_addr=https://vault-primary-active:8201
```
Next, create a token the secondary cluster will use to configure replication:
```shell
kubectl exec -ti vault-primary-0 -- vault write sys/replication/performance/primary/secondary-token id=secondary
```
The token in the output will be used when configuring the secondary cluster.
## Enable performance replication on secondary
Using the token created in the last step, enable Performance Replication on the secondary:
```shell
kubectl exec -ti vault-secondary-0 -- vault write sys/replication/performance/secondary/enable token=<TOKEN FROM PRIMARY>
```
Last, delete the remaining secondary pods and unseal them using the primary unseal token
after Kubernetes reschedules them:
```shell
kubectl delete pod vault-secondary-1
kubectl exec -ti vault-secondary-1 -- vault operator unseal <PRIMARY UNSEAL TOKEN>
kubectl delete pod vault-secondary-2
kubectl exec -ti vault-secondary-2 -- vault operator unseal <PRIMARY UNSEAL TOKEN>
```
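To verify replication afterwards, a quick status check on each cluster (the
fields in the output vary by Vault version):

```shell
kubectl exec -ti vault-primary-0 -- vault read sys/replication/performance/status
kubectl exec -ti vault-secondary-0 -- vault read sys/replication/performance/status
```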
---
layout: 'docs'
page_title: 'HA Cluster with Raft and TLS'
sidebar_current: 'docs-platform-k8s-examples-ha-tls'
description: |-
Describes how to set up a Raft HA Vault cluster with TLS certificate
---
# HA Cluster with Raft and TLS
The overview for [Integrated Storage and
TLS](/vault/docs/concepts/integrated-storage#integrated-storage-and-tls) covers
the various options for mitigating TLS verification warnings and bootstrapping
your Raft cluster.
Without proper configuration, you will see the following warning before cluster
initialization:
```shell
core: join attempt failed: error="error during raft bootstrap init call: Put "https://vault-${N}.${SERVICE}:8200/v1/sys/storage/raft/bootstrap/challenge": x509: certificate is valid for ${SERVICE}, ${SERVICE}.${NAMESPACE}, ${SERVICE}.${NAMESPACE}.svc, ${SERVICE}.${NAMESPACE}.svc.cluster.local, not vault-${N}.${SERVICE}"
```
The examples below demonstrate two specific solutions. Both solutions ensure
that the common name (CN) used for the `leader_api_addr` in the Raft stanza
matches the name(s) listed in the TLS certificate.
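To see which names a certificate actually contains, inspect its Subject
Alternative Names (the file path is illustrative):

```shell
openssl x509 -in vault.crt -noout -text | grep -A1 'Subject Alternative Name'
```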
## Before you start
1. Follow the steps from the example [HA Vault Cluster with Integrated
Storage](/vault/docs/platform/k8s/helm/examples/ha-with-raft) to build the cluster.
2. Follow the examples and instructions in [Standalone Server with
TLS](/vault/docs/platform/k8s/helm/examples/standalone-tls) to create a TLS
certificate.
## Solution 1: Use auto-join and set the TLS server in your Raft configuration
The join warning disappears if you use auto-join and set the expected TLS
server name (`${CN}`) with
[`leader_tls_servername`](/vault/docs/configuration/storage/raft#leader_tls_servername)
in the Raft stanza for your Vault configuration.
For example:
<CodeBlockConfig highlight="6,14,22">
```hcl
storage "raft" {
  path = "/vault/data"

  retry_join {
leader_api_addr = "https://vault-0.${SERVICE}:8200"
leader_tls_servername = "${CN}"
leader_client_cert_file = "/vault/tls/vault.crt"
leader_client_key_file = "/vault/tls/vault.key"
leader_ca_cert_file = "/vault/tls/vault.ca"
  }

  retry_join {
leader_api_addr = "https://vault-1.${SERVICE}:8200"
leader_tls_servername = "${CN}"
leader_client_cert_file = "/vault/tls/vault.crt"
leader_client_key_file = "/vault/tls/vault.key"
leader_ca_cert_file = "/vault/tls/vault.ca"
  }

  retry_join {
leader_api_addr = "https://vault-2.${SERVICE}:8200"
leader_tls_servername = "${CN}"
leader_client_cert_file = "/vault/tls/vault.crt"
leader_client_key_file = "/vault/tls/vault.key"
leader_ca_cert_file = "/vault/tls/vault.ca"
}
}
```
</CodeBlockConfig>
## Solution 2: Add a load balancer to your Raft configuration
If you have a load balancer for your Vault cluster, you can add a single
`retry_join` stanza to your Raft configuration and use the load balancer
address for `leader_api_addr`.
For example:
<CodeBlockConfig highlight="5">
```hcl
storage "raft" {
  path = "/vault/data"

  retry_join {
leader_api_addr = "https://vault-active:8200"
leader_client_cert_file = "/vault/tls/vault.crt"
leader_client_key_file = "/vault/tls/vault.key"
leader_ca_cert_file = "/vault/tls/vault.ca"
}
}
```
</CodeBlockConfig>
---
layout: 'docs'
page_title: 'Standalone Server with TLS'
sidebar_current: 'docs-platform-k8s-examples-standalone-tls'
description: |-
Describes how to set up a standalone Vault with TLS certificate
---
# Standalone server with TLS
@include 'helm/version.mdx'
This example can be used to set up a single server Vault cluster using TLS.
1. Create key & certificate using Kubernetes CA
2. Store key & cert into [Kubernetes secrets store](https://kubernetes.io/docs/concepts/configuration/secret/)
3. Configure helm chart to use Kubernetes secret from step 2
## 1. create key & certificate using kubernetes CA
There are five variables that will be used in this example.
```bash
# SERVICE is the name of the Vault service in kubernetes.
# It does not have to match the actual running service, though it may help for consistency.
export SERVICE=vault-server-tls
# NAMESPACE where the Vault service is running.
export NAMESPACE=vault-namespace
# SECRET_NAME to create in the kubernetes secrets store.
export SECRET_NAME=vault-server-tls
# TMPDIR is a temporary working directory.
export TMPDIR=/tmp
# CSR_NAME will be the name of our certificate signing request as seen by kubernetes.
export CSR_NAME=vault-csr
```
1. Create a key for Kubernetes to sign.
```shell-session
$ openssl genrsa -out ${TMPDIR}/vault.key 2048
Generating RSA private key, 2048 bit long modulus
...................................................................................................+++
...............+++
e is 65537 (0x10001)
```
2. Create a Certificate Signing Request (CSR).
1. Create a file `${TMPDIR}/csr.conf` with the following contents:
```bash
cat <<EOF >${TMPDIR}/csr.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.${SERVICE}
DNS.2 = *.${SERVICE}.${NAMESPACE}
DNS.3 = *.${SERVICE}.${NAMESPACE}.svc
DNS.4 = *.${SERVICE}.${NAMESPACE}.svc.cluster.local
IP.1 = 127.0.0.1
EOF
```
2. Create a CSR.
```bash
openssl req -new \
-key ${TMPDIR}/vault.key \
       -subj "/CN=system:node:${SERVICE}.${NAMESPACE}.svc/O=system:nodes" \
-out ${TMPDIR}/server.csr \
-config ${TMPDIR}/csr.conf
```
3. Create the certificate
~> **Important Note:** If you are using EKS, certificate signing requirements have changed. As per the AWS [certificate signing](https://docs.aws.amazon.com/eks/latest/userguide/cert-signing.html) documentation, EKS version `1.22` and later now requires the `signerName` to be `beta.eks.amazonaws.com/app-serving`, otherwise, the CSR will be approved but the certificate will not be issued.
1. Create a file `${TMPDIR}/csr.yaml` with the following contents:
```bash
cat <<EOF >${TMPDIR}/csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: ${CSR_NAME}
spec:
signerName: kubernetes.io/kubelet-serving
groups:
- system:authenticated
request: $(base64 ${TMPDIR}/server.csr | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
EOF
```
2. Send the CSR to Kubernetes.
```shell-session
$ kubectl create -f ${TMPDIR}/csr.yaml
certificatesigningrequest.certificates.k8s.io/vault-csr created
```
-> If this process is automated, you may need to wait to ensure the CSR has been received and stored:
`kubectl get csr ${CSR_NAME}`
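
      For example, a minimal polling loop along these lines can wait for the CSR to appear
      (illustrative only; add a timeout appropriate for your environment):

      ```bash
      until kubectl get csr ${CSR_NAME} >/dev/null 2>&1; do
        echo "waiting for CSR ${CSR_NAME} to be registered..."
        sleep 2
      done
      ```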
3. Approve the CSR in Kubernetes.
```shell-session
$ kubectl certificate approve ${CSR_NAME}
certificatesigningrequest.certificates.k8s.io/vault-csr approved
```
4. Verify that the certificate was approved and issued.
```shell-session
$ kubectl get csr ${CSR_NAME}
NAME AGE SIGNERNAME REQUESTOR CONDITION
vault-csr 1m13s kubernetes.io/kubelet-serving kubernetes-admin Approved,Issued
```
## 2. store key, cert, and kubernetes CA into kubernetes secrets store
1. Retrieve the certificate.
```shell-session
$ serverCert=$(kubectl get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')
```
-> If this process is automated, you may need to wait to ensure the certificate has been created.
If it hasn't, this will return an empty string.
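
   A minimal retry loop can guard against the empty result (illustrative only; add a timeout
   appropriate for your environment):

   ```bash
   while [ -z "${serverCert}" ]; do
     sleep 2
     serverCert=$(kubectl get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')
   done
   ```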
2. Write the certificate out to a file.
```shell-session
$ echo "${serverCert}" | openssl base64 -d -A -out ${TMPDIR}/vault.crt
```
3. Retrieve Kubernetes CA.
```bash
kubectl get secret \
-o jsonpath="{.items[?(@.type==\"kubernetes.io/service-account-token\")].data['ca\.crt']}" \
| base64 --decode > ${TMPDIR}/vault.ca
```
4. Create the namespace.
```shell-session
$ kubectl create namespace ${NAMESPACE}
namespace/vault-namespace created
```
5. Store the key, cert, and Kubernetes CA into Kubernetes secrets.
```shell-session
$ kubectl create secret generic ${SECRET_NAME} \
--namespace ${NAMESPACE} \
--from-file=vault.key=${TMPDIR}/vault.key \
--from-file=vault.crt=${TMPDIR}/vault.crt \
--from-file=vault.ca=${TMPDIR}/vault.ca
secret/vault-server-tls created
```
## 3. helm configuration
The following `custom-values.yaml` can be used to set up a single server Vault cluster using TLS.
This assumes that a Kubernetes `secret` exists with the server certificate, key and
certificate authority:
```yaml
global:
enabled: true
tlsDisable: false
server:
extraEnvironmentVars:
VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
volumes:
- name: userconfig-vault-server-tls
secret:
defaultMode: 420
secretName: vault-server-tls # Matches the ${SECRET_NAME} from above
volumeMounts:
- mountPath: /vault/userconfig/vault-server-tls
name: userconfig-vault-server-tls
readOnly: true
standalone:
enabled: true
config: |
listener "tcp" {
address = "[::]:8200"
cluster_address = "[::]:8201"
tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
}
storage "file" {
path = "/vault/data"
}
```
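With the values saved to a file such as `custom-values.yaml`, the chart can then be installed
along the following lines (the release name `vault` is an assumption; adjust it and the namespace
for your environment):

```shell-session
$ helm install vault hashicorp/vault \
    --namespace ${NAMESPACE} \
    -f custom-values.yaml
```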
---
layout: 'docs'
page_title: 'Highly Available Vault Enterprise Disaster Recovery Clusters with Raft'
sidebar_current: 'docs-platform-k8s-examples-enterprise-dr-with-raft'
description: |-
  Describes how to set up Disaster Recovery clusters with Integrated Storage (Raft)
---
# Highly available Vault enterprise disaster recovery clusters with integrated storage (Raft)
@include 'helm/version.mdx'
The following is an example of creating a disaster recovery cluster using Vault Helm.
For more information on Disaster Recovery, [see the official documentation](/vault/docs/enterprise/replication/).
-> For license configuration refer to [Running Vault Enterprise](/vault/docs/platform/k8s/helm/enterprise).
## Primary cluster
First, create the primary cluster:
```shell
helm install vault-primary hashicorp/vault \
--set='server.image.repository=hashicorp/vault-enterprise' \
--set='server.image.tag=1.18.1-ent' \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
```
Next, initialize and unseal `vault-primary-0` pod:
```shell
kubectl exec -ti vault-primary-0 -- vault operator init
kubectl exec -ti vault-primary-0 -- vault operator unseal
```
Finally, join the remaining pods to the Raft cluster and unseal them. The pods
will need to communicate directly so we'll configure the pods to use the internal
service provided by the Helm chart:
```shell
kubectl exec -ti vault-primary-1 -- vault operator raft join http://vault-primary-0.vault-primary-internal:8200
kubectl exec -ti vault-primary-1 -- vault operator unseal
kubectl exec -ti vault-primary-2 -- vault operator raft join http://vault-primary-0.vault-primary-internal:8200
kubectl exec -ti vault-primary-2 -- vault operator unseal
```
To verify if the Raft cluster has successfully been initialized, run the following.
First, login using the `root` token on the `vault-primary-0` pod:
```shell
kubectl exec -ti vault-primary-0 -- vault login
```
Next, list all the raft peers:
```shell
$ kubectl exec -ti vault-primary-0 -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
a1799962-8711-7f28-23f0-cea05c8a527d vault-primary-0.vault-primary-internal:8201 leader true
e6876c97-aaaa-a92e-b99a-0aafab105745 vault-primary-1.vault-primary-internal:8201 follower true
4b5d7383-ff31-44df-e008-6a606828823b vault-primary-2.vault-primary-internal:8201 follower true
```
## Secondary cluster
With the primary cluster created, next create a secondary cluster and enable
disaster recovery replication.
```shell
helm install vault-secondary hashicorp/vault \
--set='server.image.repository=hashicorp/vault-enterprise' \
--set='server.image.tag=1.18.1-ent' \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
```
Next, initialize and unseal `vault-secondary-0` pod:
```shell
kubectl exec -ti vault-secondary-0 -- vault operator init
kubectl exec -ti vault-secondary-0 -- vault operator unseal
```
Finally, join the remaining pods to the Raft cluster and unseal them. The pods
will need to communicate directly so we'll configure the pods to use the internal
service provided by the Helm chart:
```shell
kubectl exec -ti vault-secondary-1 -- vault operator raft join http://vault-secondary-0.vault-secondary-internal:8200
kubectl exec -ti vault-secondary-1 -- vault operator unseal
kubectl exec -ti vault-secondary-2 -- vault operator raft join http://vault-secondary-0.vault-secondary-internal:8200
kubectl exec -ti vault-secondary-2 -- vault operator unseal
```
To verify if the Raft cluster has successfully been initialized, run the following.
First, login using the `root` token on the `vault-secondary-0` pod:
```shell
kubectl exec -ti vault-secondary-0 -- vault login
```
Next, list all the raft peers:
```shell
$ kubectl exec -ti vault-secondary-0 -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
a1799962-8711-7f28-23f0-cea05c8a527d vault-secondary-0.vault-secondary-internal:8201 leader true
e6876c97-aaaa-a92e-b99a-0aafab105745 vault-secondary-1.vault-secondary-internal:8201 follower true
4b5d7383-ff31-44df-e008-6a606828823b vault-secondary-2.vault-secondary-internal:8201 follower true
```
## Enable disaster recovery replication on primary
With the initial clusters setup, we can now configure them for disaster recovery replication.
First, on the primary cluster, enable replication:
```shell
kubectl exec -ti vault-primary-0 -- vault write -f sys/replication/dr/primary/enable primary_cluster_addr=https://vault-primary-active:8201
```
Next, create a token the secondary cluster will use to configure replication:
```shell
kubectl exec -ti vault-primary-0 -- vault write sys/replication/dr/primary/secondary-token id=secondary
```
The token in the output will be used when configuring the secondary cluster.
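If you need to capture the token non-interactively, one option is to request it in JSON format
and extract the response-wrapping token (a sketch; assumes `jq` is available):

```shell
DR_TOKEN=$(kubectl exec -ti vault-primary-0 -- \
  vault write -format=json sys/replication/dr/primary/secondary-token id=secondary \
  | jq -r '.wrap_info.token')
```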
## Enable disaster recovery replication on secondary
Using the token created in the last step, enable disaster recovery replication on the secondary:
```shell
kubectl exec -ti vault-secondary-0 -- vault write sys/replication/dr/secondary/enable token=<TOKEN FROM PRIMARY>
```
Last, delete the remainder secondary pods and unseal them using the primary unseal token
after Kubernetes reschedules them:
```shell
kubectl delete pod vault-secondary-1
kubectl exec -ti vault-secondary-1 -- vault operator unseal <PRIMARY UNSEAL TOKEN>
kubectl delete pod vault-secondary-2
kubectl exec -ti vault-secondary-2 -- vault operator unseal <PRIMARY UNSEAL TOKEN>
```
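To confirm that replication is established, check the disaster recovery status on either cluster;
the `mode` field should report `primary` or `secondary` accordingly:

```shell
kubectl exec -ti vault-secondary-0 -- vault read sys/replication/dr/status
```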
---
layout: 'docs'
page_title: 'Vault Agent Injector TLS with Cert-Manager'
sidebar_current: 'docs-platform-k8s-examples-injector-tls-cert-manager'
description: |-
Describes how to set up the Vault Agent Injector with certificates and keys generated by cert-manager.
---
# Vault agent injector TLS with Cert-Manager
The following instructions demonstrate how to configure the Vault Agent Injector to use certificates generated by [cert-manager](https://cert-manager.io/). This allows you to run multiple replicas of the Vault Agent Injector in a Kubernetes cluster.
## Prerequisites
Install cert-manager if not already installed (see the [cert-manager documentation](https://cert-manager.io/docs/installation/)). For example, with helm:
```shell
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
```
## Create a certificate authority (CA)
For this example we will bootstrap a self-signed certificate authority (CA) [Issuer](https://cert-manager.io/docs/configuration/). If you already have a [ClusterIssuer](https://cert-manager.io/docs/concepts/issuer/) configured for your cluster, you may skip this step.
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: injector-selfsigned-ca
spec:
isCA: true
commonName: Agent Inject CA
secretName: injector-ca-secret
duration: 87660h # 10 years
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: selfsigned
kind: Issuer
group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: injector-ca-issuer
spec:
ca:
secretName: injector-ca-secret
```
Save that to a file named `ca-issuer.yaml`, and apply to your Kubernetes cluster:
```console
$ kubectl apply -n vault -f ca-issuer.yaml
issuer.cert-manager.io/selfsigned created
certificate.cert-manager.io/injector-selfsigned-ca created
issuer.cert-manager.io/injector-ca-issuer created
$ kubectl -n vault get issuers -o wide
NAME READY STATUS AGE
injector-ca-issuer True Signing CA verified 7s
selfsigned True 7s
$ kubectl -n vault get certificates injector-selfsigned-ca -o wide
NAME READY SECRET ISSUER STATUS AGE
injector-selfsigned-ca True injector-ca-secret selfsigned Certificate is up to date and has not expired 32s
```
## Create the Vault agent injector certificate
Next we can create a request for cert-manager to generate a certificate and key
signed by the certificate authority above. This certificate and key will be used
by the Vault Agent Injector for TLS communications with the Kubernetes API.
The Certificate request object references the CA issuer created above, and specifies the name of the Secret where the CA, Certificate, and Key will be stored by cert-manager.
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: injector-certificate
spec:
secretName: injector-tls
duration: 24h
renewBefore: 144m # roughly 10% of 24h
dnsNames:
- vault-agent-injector-svc
- vault-agent-injector-svc.vault
- vault-agent-injector-svc.vault.svc
issuerRef:
name: injector-ca-issuer
commonName: Agent Inject Cert
```
~> **Important Note:** The dnsNames for the certificate must be configured to use the name
of the Vault Agent Injector Kubernetes service and namespace where it is deployed.
In this example the Vault Agent Injector service name is `vault-agent-injector-svc` in the `vault` namespace.
This uses the pattern `<k8s service name>.<k8s namespace>.svc`.
Save the Certificate yaml to a file and apply to your cluster:
```shell
$ kubectl -n vault apply -f injector-certificate.yaml
certificate.cert-manager.io/injector-certificate created
$ kubectl -n vault get certificates injector-certificate -o wide
NAME READY SECRET ISSUER STATUS AGE
injector-certificate True injector-tls injector-ca-issuer Certificate is up to date and has not expired 41s
$ kubectl -n vault get secret injector-tls
NAME TYPE DATA AGE
injector-tls kubernetes.io/tls 3 6m59s
```
## Configuration
Now that a certificate authority and a signed certificate have been created, we can configure
Helm and the Vault Agent Injector to use them.
Install the Vault Agent Injector with the following custom values:
```shell
$ helm install vault hashicorp/vault \
--namespace=vault \
--set injector.replicas=2 \
--set injector.leaderElector.enabled=false \
--set injector.certs.secretName=injector-tls \
  --set injector.webhook.annotations="cert-manager.io/inject-ca-from: vault/injector-certificate"
```
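Once installed, you can optionally confirm that cert-manager injected the CA bundle into the
injector's webhook configuration. The webhook name below assumes the Helm release is named `vault`:

```shell
$ kubectl get mutatingwebhookconfiguration vault-agent-injector-cfg \
    -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | head -c 40
```

A non-empty value indicates the CA bundle was injected successfully.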
---
layout: 'docs'
page_title: 'Vault Agent Injector TLS Configuration'
sidebar_current: 'docs-platform-k8s-examples-injector-tls'
description: |-
Describes how to set up the Vault Agent Injector with manually generated certificates and keys.
---
# Vault agent injector TLS configuration
@include 'helm/version.mdx'
The following instructions demonstrate how to manually configure the Vault Agent Injector
with self-signed certificates.
## Create a certificate authority (CA)
First, create a private key to be used by our custom Certificate Authority (CA):
```shell
$ openssl genrsa -out injector-ca.key 2048
```
Next, create a certificate authority certificate:
~> **Important Note:** Values such as days (how long the certificate is valid for) should be configured for your environment.
```shell
$ openssl req \
-x509 \
-new \
-nodes \
-key injector-ca.key \
-sha256 \
-days 1825 \
-out injector-ca.crt \
-subj "/C=US/ST=CA/L=San Francisco/O=HashiCorp/CN=vault-agent-injector-svc"
```
## Create Vault agent injector certificate
Next we can create a certificate and key signed by the certificate authority generated above. This
certificate and key will be used by the Vault Agent Injector for TLS communications with the Kubernetes
API.
First, create a private key for the certificate:
```shell
$ openssl genrsa -out tls.key 2048
```
Next, create a certificate signing request (CSR) to be used when signing the certificate:
```shell
$ openssl req \
-new \
-key tls.key \
-out tls.csr \
-subj "/C=US/ST=CA/L=San Francisco/O=HashiCorp/CN=vault-agent-injector-svc"
```
After creating the CSR, create an extension file to configure additional parameters for signing
the certificate.
~> **Important Note:** The alternative names for the certificate must be configured to use the name
of the Vault Agent Injector Kubernetes service and the namespace where it's created.
In this example the Vault Agent Injector service name is `vault-agent-injector-svc` in the `vault` namespace.
This uses the pattern `<k8s service name>.<k8s namespace>.svc.cluster.local`.
```shell
$ cat <<EOF >csr.conf
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = vault-agent-injector-svc
DNS.2 = vault-agent-injector-svc.vault
DNS.3 = vault-agent-injector-svc.vault.svc
DNS.4 = vault-agent-injector-svc.vault.svc.cluster.local
EOF
```
Finally, sign the certificate:
~> **Important Note:** Values such as days (how long the certificate is valid for) should be configured for your environment.
```shell
$ openssl x509 \
-req \
-in tls.csr \
-CA injector-ca.crt \
-CAkey injector-ca.key \
-CAcreateserial \
-out tls.crt \
-days 1825 \
-sha256 \
-extfile csr.conf
```
## Configuration
Now that a certificate authority and a signed certificate have been created, we can configure
Helm and the Vault Agent Injector to use them.
First, create a Kubernetes secret containing the certificate and key created above:
~> **Important Note:** This example assumes the Vault Agent Injector is running in the `vault` namespace.
```shell
$ kubectl create secret generic injector-tls \
--from-file tls.crt \
--from-file tls.key \
--namespace=vault
```
Next, base64 encode the certificate authority so Kubernetes can verify the authenticity of the certificate:
```shell
$ export CA_BUNDLE=$(cat injector-ca.crt | base64)
```
Finally, install the Vault Agent Injector with the following custom values:
```shell
$ helm install vault hashicorp/vault \
--namespace=vault \
--set="injector.certs.secretName=injector-tls" \
--set="injector.certs.caBundle=${CA_BUNDLE?}"
```
---
layout: docs
page_title: Vault Agent Sidecar Injector Examples
description: This section documents examples of using the Vault Agent Injector.
---
# Vault agent injector examples
The following are different configuration examples to support a variety of
deployment models.
~> A common mistake is to set the annotation on the Deployment or other resource.
Ensure that the injector annotations are specified on the pod specification when
using higher level constructs such as deployments, jobs or statefulsets.
## Before using the Vault agent injector
Before applying Vault Agent injection annotations to pods, the following requirements
should be satisfied.
### Connectivity
- The Kubernetes API can connect to the Vault Agent Injector service on port `443`, and
  the injector can connect to the Kubernetes API.
- Vault can connect to the Kubernetes API.
- Pods in the Kubernetes cluster can connect to Vault.
~> Note: The Kubernetes API typically runs on the master nodes, and the Vault Agent injector
on a worker node in a Kubernetes cluster. <br/><br/>
On Kubernetes clusters that have aggregator routing enabled (ex. [GKE private
clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)),
the Kubernetes API will connect directly to the injector service endpoint,
which is on port `8080`.
### Kubernetes and Vault configuration
- The Kubernetes auth method should be configured and enabled in Vault.
- The pod should have a service account.
- The desired secrets should exist within Vault.
- The service account should be bound to a Vault role with a policy enabling access to the desired secrets.
For more information on configuring the Vault Kubernetes auth method,
[see the official documentation](/vault/docs/auth/kubernetes#configuration).
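As a rough sketch, that configuration can look like the following; the role, policy, and service
account names here are illustrative:

```shell
vault auth enable kubernetes

vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"

vault write auth/kubernetes/role/db-app \
    bound_service_account_names=app-example \
    bound_service_account_namespaces=default \
    policies=db-app \
    ttl=24h
```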
## Debugging
If an error occurs with a mutation request, Kubernetes will attach the error to the
owner of the pod. Check the following for errors:
- If the pod was created by a deployment or statefulset, check for errors in the `replicaset`
that owns the pod.
- If the pod was created by a job, check the `job` for errors.
## Patching existing pods
To patch existing pods, a Kubernetes patch can be applied to add the required annotations
to pods. When applying a patch, the pods will be rescheduled.
First, create the patch:
```bash
cat <<EOF >> ./patch.yaml
spec:
template:
metadata:
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-status: "update"
vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/appdb?sslmode=disable
          {{- end -}}
vault.hashicorp.com/role: "db-app"
vault.hashicorp.com/ca-cert: "/vault/tls/ca.crt"
vault.hashicorp.com/client-cert: "/vault/tls/client.crt"
vault.hashicorp.com/client-key: "/vault/tls/client.key"
vault.hashicorp.com/tls-secret: "vault-tls-client"
EOF
```
Next, apply the patch:
```bash
kubectl patch deployment <MY DEPLOYMENT> --patch "$(cat patch.yaml)"
```
The pod should now be rescheduled with additional containers. The pod can be inspected
using the `kubectl describe` command:
```bash
kubectl describe pod <name of pod>
```
## Deployments, StatefulSets, etc.
The annotations for configuring Vault Agent injection must be on the pod
specification. Since higher level resources such as Deployments wrap pod
specification templates, Vault Agent Injector can be used with all of these
higher level constructs, too.
An example Deployment below shows how to enable Vault Agent injection:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-example-deployment
spec:
replicas: 1
selector:
matchLabels:
app: app-example
template:
metadata:
labels:
app: app-example
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/agent-inject-secret-db-creds: 'database/creds/db-app'
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/appdb?sslmode=disable
          {{- end -}}
vault.hashicorp.com/role: 'db-app'
vault.hashicorp.com/ca-cert: '/vault/tls/ca.crt'
vault.hashicorp.com/client-cert: '/vault/tls/client.crt'
vault.hashicorp.com/client-key: '/vault/tls/client.key'
vault.hashicorp.com/tls-secret: 'vault-tls-client'
spec:
containers:
- name: app
image: 'app:1.0.0'
serviceAccountName: app-example
```
## ConfigMap example
The following example creates a deployment that mounts a Kubernetes ConfigMap
containing Vault Agent configuration files. For a complete list of the Vault
Agent configuration settings, [see the Agent documentation](/vault/docs/agent-and-proxy/agent/template#vault-agent-templates).
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-example-deployment
spec:
replicas: 1
selector:
matchLabels:
app: app-example
template:
metadata:
labels:
app: app-example
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/agent-configmap: 'my-configmap'
vault.hashicorp.com/tls-secret: 'vault-tls-client'
spec:
containers:
- name: app
image: 'app:1.0.0'
serviceAccountName: app-example
---
apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
config.hcl: |
"auto_auth" = {
"method" = {
"config" = {
"role" = "db-app"
}
"type" = "kubernetes"
}
"sink" = {
"config" = {
"path" = "/home/vault/.token"
}
"type" = "file"
}
}
"exit_after_auth" = false
"pid_file" = "/home/vault/.pid"
"template" = {
"contents" = "postgres://:@postgres:5432/mydb?sslmode=disable"
"destination" = "/vault/secrets/db-creds"
}
"vault" = {
"address" = "https://vault.demo.svc.cluster.local:8200"
"ca_cert" = "/vault/tls/ca.crt"
"client_cert" = "/vault/tls/client.crt"
"client_key" = "/vault/tls/client.key"
}
config-init.hcl: |
"auto_auth" = {
"method" = {
"config" = {
"role" = "db-app"
}
"type" = "kubernetes"
}
"sink" = {
"config" = {
"path" = "/home/vault/.token"
}
"type" = "file"
}
}
"exit_after_auth" = true
"pid_file" = "/home/vault/.pid"
"template" = {
"contents" = "postgres://:@postgres:5432/mydb?sslmode=disable"
"destination" = "/vault/secrets/db-creds"
}
"vault" = {
"address" = "https://vault.demo.svc.cluster.local:8200"
"ca_cert" = "/vault/tls/ca.crt"
"client_cert" = "/vault/tls/client.crt"
"client_key" = "/vault/tls/client.key"
}
```
## Environment variable example
The following example demonstrates how templates can be used to create environment
variables. A template should be created that exports a Vault secret as an environment
variable and the application container should source those files during startup.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
labels:
app: web
spec:
replicas: 1
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/role: 'web'
vault.hashicorp.com/agent-inject-secret-config: 'secret/data/web'
# Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/web" -}}
            export api_key="{{ .Data.data.api_key }}"
          {{- end }}
spec:
serviceAccountName: web
containers:
- name: web
image: alpine:latest
command:
['sh', '-c']
args:
['source /vault/secrets/config && <entrypoint script>']
ports:
- containerPort: 9090
```
## AppRole authentication
The following example demonstrates how the AppRole authentication method can be used by
Vault Agent for retrieving secrets. A Kubernetes secret containing the AppRole secret ID
and role ID should be created first.
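For example, such a secret might be created as follows (a sketch; `role-id` and `secret-id` are
local files holding the AppRole credentials, and the secret name matches the
`vault.hashicorp.com/agent-extra-secret` annotation in the deployment below):

```shell
kubectl create secret generic approle-example \
    --from-file=role-id \
    --from-file=secret-id
```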
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
labels:
app: web
spec:
replicas: 1
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/agent-extra-secret: 'approle-example'
vault.hashicorp.com/auth-type: 'approle'
vault.hashicorp.com/auth-path: 'auth/approle'
vault.hashicorp.com/auth-config-role-id-file-path: '/vault/custom/role-id'
vault.hashicorp.com/auth-config-secret-id-file-path: '/vault/custom/secret-id'
vault.hashicorp.com/agent-inject-secret-db-creds: 'database/creds/db-app'
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres.postgres.svc:5432/wizard?sslmode=disable
          {{- end -}}
vault.hashicorp.com/role: 'my-role'
vault.hashicorp.com/tls-secret: 'vault-tls'
vault.hashicorp.com/ca-cert: '/vault/tls/ca.crt'
spec:
serviceAccountName: web
containers:
- name: web
image: alpine:latest
args:
['sh', '-c', 'source /vault/secrets/config && <entrypoint script>']
ports:
- containerPort: 9090
```
## PKI cert example
The following example demonstrates how to use the [`pkiCert` function][pkiCert] and
[`writeToFile` function][writeToFile] from consul-template to create two files
from a template: one for the certificate and CA (`cert.pem`) and one for the key
(`cert.key`) generated by [Vault's PKI Secrets Engine](/vault/docs/secrets/pki).
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
labels:
app: web
spec:
replicas: 1
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/role: 'web'
vault.hashicorp.com/agent-inject-secret-certs: 'pki/issue/cert'
        vault.hashicorp.com/agent-inject-template-certs: |
          {{- with pkiCert "pki/issue/cert" "common_name=test.example.com" -}}
          {{ .Cert }}{{ .CA }}{{ .Key }}
          {{ .Key | writeToFile "/vault/secrets/cert.key" "vault" "vault" "0400" }}
          {{ .CA | writeToFile "/vault/secrets/cert.pem" "vault" "vault" "0400" }}
          {{ .Cert | writeToFile "/vault/secrets/cert.pem" "vault" "vault" "0400" "append" }}
          {{- end -}}
spec:
serviceAccountName: web
containers:
- name: web
image: nginx
```
[pkiCert]: https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#pkicert
[writeToFile]: https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#writeToFile
## Cross namespace secret sharing ((#cross-namespace))
1. [Configure Vault for secret sharing across namespaces][cross-namespace].
1. Use the following Pod annotations to authenticate to the Kubernetes method in
the `us-west-org` namespace and render secrets from the `us-east-org`
namespace into the file `/vault/secrets/marketing`
```yaml
---
apiVersion: v1
kind: Pod
metadata:
name: cross-namespace
namespace: client-nicecorp
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "cross-namespace-demo"
vault.hashicorp.com/auth-path: "us-west-org/auth/kubernetes"
    vault.hashicorp.com/agent-inject-template-marketing: |
      {{- with secret "us-east-org/kv-marketing/campaign" -}}
      The secret is: {{ .Data.data.campaign }}
      {{- end -}}
spec:
serviceAccountName: mega-app
containers:
- name: campaign
image: nginx
```
[cross-namespace]: https://support.hashicorp.com/hc/en-us/articles/27093291534995-How-to-configure-cross-namespace-access-in-Vault-Enterprise
---
layout: docs
page_title: Agent Sidecar Injector Overview
description: >-
The Vault Agent Sidecar Injector is a Kubernetes admission webhook that adds
Vault Agent containers to pods for consuming Vault secrets.
---
# Agent sidecar injector
The Vault Agent Injector alters pod specifications to include Vault Agent
containers that render Vault secrets to a shared memory volume using
[Vault Agent Templates](/vault/docs/agent-and-proxy/agent/template).
By rendering secrets to a shared volume, containers within the pod can consume
Vault secrets without being Vault aware.
The injector is a [Kubernetes Mutation Webhook Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).
The controller intercepts pod events and applies mutations to the pod if annotations exist within
the request. This functionality is provided by the [vault-k8s](https://github.com/hashicorp/vault-k8s)
project and can be automatically installed and configured using the
[Vault Helm](https://github.com/hashicorp/vault-helm) chart.
@include 'kubernetes-supported-versions.mdx'
## Overview
The Vault Agent Injector works by intercepting pod `CREATE` and `UPDATE`
events in Kubernetes. The controller parses the event and looks for the metadata
annotation `vault.hashicorp.com/agent-inject: true`. If found, the controller will
alter the pod specification based on other annotations present.
### Mutations
At a minimum, every container in the pod will be configured to mount a shared
memory volume. This volume is mounted to `/vault/secrets` and will be used by the Vault
Agent containers for sharing secrets with the other containers in the pod.
Next, two types of Vault Agent containers can be injected: init and sidecar. The
init container will prepopulate the shared memory volume with the requested
secrets prior to the other containers starting. The sidecar container will
continue to authenticate and render secrets to the same location for the life of the pod.
Using annotations, the initialization and sidecar containers may be disabled.
Last, two additional types of volumes can be optionally mounted to the Vault Agent
containers. The first is a secret volume containing TLS requirements such as client
and CA (certificate authority) certificates and keys. This volume is useful when
communicating and verifying the Vault server's authenticity using TLS. The second
is a configuration map containing Vault Agent configuration files. This volume is
useful to customize Vault Agent beyond what the provided annotations offer.
### Authenticating with Vault
The primary method of authentication with Vault when using the Vault Agent Injector
is the service account attached to the pod. Other authentication methods can be configured
using annotations.
For Kubernetes authentication, the service account must be bound to a Vault role and a
policy granting access to the secrets desired.
A service account must be present to use the Vault Agent Injector with the Kubernetes
authentication method. It is _not_ recommended to bind Vault roles to the default service
account provided to pods if no service account is defined.
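For illustration, the following is a minimal sketch of the Vault-side setup: a policy
and a Kubernetes auth role binding a service account to it. The names (`app` policy
and role, `web` service account, `default` namespace) and the secret path are
placeholders, not values the injector requires:

```shell-session
$ vault policy write app - <<EOF
path "database/creds/db-app" {
  capabilities = ["read"]
}
EOF

$ vault write auth/kubernetes/role/app \
    bound_service_account_names=web \
    bound_service_account_namespaces=default \
    policies=app \
    ttl=24h
```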
### Requesting secrets
There are two methods of configuring the Vault Agent containers to render secrets:
- the `vault.hashicorp.com/agent-inject-secret` annotation, or
- a configuration map containing Vault Agent configuration files.
Only one of these methods may be used at any time.
#### Secrets via annotations
To configure secret injection using annotations, the user must supply:
- one or more _secret_ annotations, and
- the Vault role used to access those secrets.
The annotation must have the format:
```yaml
vault.hashicorp.com/agent-inject-secret-<unique-name>: /path/to/secret
```
The unique name will be the filename of the rendered secret and must be unique if
multiple secrets are defined by the user. For example, consider the following
secret annotations:
```yaml
vault.hashicorp.com/agent-inject-secret-foo: database/roles/app
vault.hashicorp.com/agent-inject-secret-bar: consul/creds/app
vault.hashicorp.com/role: 'app'
```
The first annotation will be rendered to `/vault/secrets/foo` and the second
annotation will be rendered to `/vault/secrets/bar`.
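Put together, a minimal pod using these annotations might look like the following
sketch (the `app` names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  annotations:
    vault.hashicorp.com/agent-inject: 'true'
    vault.hashicorp.com/role: 'app'
    vault.hashicorp.com/agent-inject-secret-foo: 'database/roles/app'
    vault.hashicorp.com/agent-inject-secret-bar: 'consul/creds/app'
spec:
  serviceAccountName: app
  containers:
    - name: app
      image: nginx
```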
It's possible to set the file format of the rendered secret using the annotation. For example, the
following secret will be rendered to `/vault/secrets/foo.txt`:
```yaml
vault.hashicorp.com/agent-inject-secret-foo.txt: database/roles/app
vault.hashicorp.com/role: 'app'
```
The secret unique name must consist of alphanumeric characters, `.`, `_` or `-`.
##### Secret templates
~> Vault Agent uses the Consul Template project to render secrets. For more information
on writing templates, see the [Consul Template documentation](https://github.com/hashicorp/consul-template).
How the secret is rendered to the file is also configurable. To configure the template
used, the user must supply a _template_ annotation using the same unique name of
the secret. The annotation must have the following format:
```yaml
vault.hashicorp.com/agent-inject-template-<unique-name>: |
<
TEMPLATE
HERE
>
```
For example, consider the following:
```yaml
vault.hashicorp.com/agent-inject-secret-foo: 'database/creds/db-app'
vault.hashicorp.com/agent-inject-template-foo: |
  {{- with secret "database/creds/db-app" -}}
  postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/mydb?sslmode=disable
  {{- end }}
vault.hashicorp.com/role: 'app'
```
The rendered secret would look like this within the container:
```shell-session
$ cat /vault/secrets/foo
postgres://v-kubernet-pg-app-q0Z7WPfVN:A1a-BUEuQR52oAqPrP1J@postgres:5432/mydb?sslmode=disable
```
~> The default left and right template delimiters are `{{` and `}}`.
If no template is provided, the following generic template is used:
```
{{ with secret "/path/to/secret" }}
{{ range $k, $v := .Data }}
{{ $k }}: {{ $v }}
{{ end }}
{{ end }}
```
For example, the following annotation will use the default template to render
PostgreSQL secrets found at the configured path:
```yaml
vault.hashicorp.com/agent-inject-secret-foo: 'database/roles/pg-app'
vault.hashicorp.com/role: 'app'
```
The rendered secret would look like this within the container:
```shell-session
$ cat /vault/secrets/foo
password: A1a-BUEuQR52oAqPrP1J
username: v-kubernet-pg-app-q0Z7WPfVNqqTJuoDqCTY-1576529094
```
~> Some secrets such as KV are stored in maps. Their data can be accessed using `.Data.data.<NAME>`
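For example, a KV version 2 secret might be rendered with a template along these
lines; the mount (`secret`), path, and key (`api_key`) are assumptions for
illustration only:

```yaml
vault.hashicorp.com/agent-inject-secret-config: 'secret/data/app/config'
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "secret/data/app/config" -}}
  export API_KEY="{{ .Data.data.api_key }}"
  {{- end }}
```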
### Renewals and updating secrets
For more information on when Vault Agent fetches and renews secrets, see the
[Agent documentation](/vault/docs/agent-and-proxy/agent/template#renewals-and-updating-secrets).
### Vault agent configuration map
For advanced use cases, it may be required to define Vault Agent configuration
files to mount instead of using secret and template annotations. The Vault Agent
Injector supports mounting ConfigMaps by specifying the name using the `vault.hashicorp.com/agent-configmap`
annotation. The configuration files will be mounted to `/vault/configs`.
The configuration map must contain either one or both of the following files:
- **config-init.hcl** used by the init container. This must have `exit_after_auth` set to `true`.
- **config.hcl** used by the sidecar container. This must have `exit_after_auth` set to `false`.
An example of mounting a Vault Agent configmap [can be found here](/vault/docs/platform/k8s/injector/examples#configmap-example).
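As a rough sketch (the linked example is authoritative), such a ConfigMap might look
like the following; the role, secret path, and destination are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  # Used by the init container; exits after the first render.
  config-init.hcl: |
    exit_after_auth = true
    pid_file = "/home/vault/pidfile"

    auto_auth {
      method "kubernetes" {
        mount_path = "auth/kubernetes"
        config = {
          role = "app"
        }
      }
      sink "file" {
        config = {
          path = "/home/vault/.vault-token"
        }
      }
    }

    template {
      destination = "/vault/secrets/db-creds"
      contents    = "{{ with secret \"database/creds/db-app\" }}{{ .Data.username }}:{{ .Data.password }}{{ end }}"
    }
  # Used by the sidecar; identical except the agent keeps running.
  config.hcl: |
    exit_after_auth = false
    pid_file = "/home/vault/pidfile"

    auto_auth {
      method "kubernetes" {
        mount_path = "auth/kubernetes"
        config = {
          role = "app"
        }
      }
      sink "file" {
        config = {
          path = "/home/vault/.vault-token"
        }
      }
    }

    template {
      destination = "/vault/secrets/db-creds"
      contents    = "{{ with secret \"database/creds/db-app\" }}{{ .Data.username }}:{{ .Data.password }}{{ end }}"
    }
```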
## Tutorial
Refer to the [Injecting Secrets into Kubernetes Pods via Vault Helm
Sidecar](/vault/tutorials/kubernetes/kubernetes-sidecar) guide
for a step-by-step tutorial. | vault | layout docs page title Agent Sidecar Injector Overview description The Vault Agent Sidecar Injector is a Kubernetes admission webhook that adds Vault Agent containers to pods for consuming Vault secrets Agent sidecar injector The Vault Agent Injector alters pod specifications to include Vault Agent containers that render Vault secrets to a shared memory volume using Vault Agent Templates vault docs agent and proxy agent template By rendering secrets to a shared volume containers within the pod can consume Vault secrets without being Vault aware The injector is a Kubernetes Mutation Webhook Controller https kubernetes io docs reference access authn authz admission controllers The controller intercepts pod events and applies mutations to the pod if annotations exist within the request This functionality is provided by the vault k8s https github com hashicorp vault k8s project and can be automatically installed and configured using the Vault Helm https github com hashicorp vault helm chart include kubernetes supported versions mdx Overview The Vault Agent Injector works by intercepting pod CREATE and UPDATE events in Kubernetes The controller parses the event and looks for the metadata annotation vault hashicorp com agent inject true If found the controller will alter the pod specification based on other annotations present Mutations At a minimum every container in the pod will be configured to mount a shared memory volume This volume is mounted to vault secrets and will be used by the Vault Agent containers for sharing secrets with the other containers in the pod Next two types of Vault Agent containers can be injected init and sidecar The init container will prepopulate the shared memory volume with the requested secrets prior to the other containers starting The sidecar container will continue to authenticate and render secrets to the same location as the pod runs Using annotations the initialization and sidecar containers may be disabled Last two additional types of volumes can be optionally mounted to the Vault Agent containers The first is secret volume containing TLS requirements such as client and CA certificate authority certificates and keys This volume is useful when communicating and verifying the Vault server s authenticity using TLS The second is a configuration map containing Vault Agent configuration files This volume is useful to customize Vault Agent beyond what the provided annotations offer Authenticating with Vault The primary method of authentication with Vault when using the Vault Agent Injector is the service account attached to the pod Other authentication methods can be configured using annotations For Kubernetes authentication the service account must be bound to a Vault role and a policy granting access to the secrets desired A service account must be present to use the Vault Agent Injector with the Kubernetes authentication method It is not recommended to bind Vault roles to the default service account provided to pods if no service account is defined Requesting secrets There are two methods of configuring the Vault Agent containers to render secrets the vault hashicorp com agent inject secret annotation or a configuration map containing Vault Agent configuration files Only one of these methods may be used at any time Secrets via annotations To configure secret injection using annotations the user must supply one or more secret annotations and the Vault role used to access those secrets The annotation must have the format 
yaml vault hashicorp com agent inject secret unique name path to secret The unique name will be the filename of the rendered secret and must be unique if multiple secrets are defined by the user For example consider the following secret annotations yaml vault hashicorp com agent inject secret foo database roles app vault hashicorp com agent inject secret bar consul creds app vault hashicorp com role app The first annotation will be rendered to vault secrets foo and the second annotation will be rendered to vault secrets bar It s possible to set the file format of the rendered secret using the annotation For example the following secret will be rendered to vault secrets foo txt yaml vault hashicorp com agent inject secret foo txt database roles app vault hashicorp com role app The secret unique name must consist of alphanumeric characters or Secret templates Vault Agent uses the Consul Template project to render secrets For more information on writing templates see the Consul Template documentation https github com hashicorp consul template How the secret is rendered to the file is also configurable To configure the template used the user must supply a template annotation using the same unique name of the secret The annotation must have the following format yaml vault hashicorp com agent inject template unique name TEMPLATE HERE For example consider the following yaml vault hashicorp com agent inject secret foo database creds db app vault hashicorp com agent inject template foo postgres postgres 5432 mydb sslmode disable vault hashicorp com role app The rendered secret would look like this within the container shell session cat vault secrets foo postgres v kubernet pg app q0Z7WPfVN A1a BUEuQR52oAqPrP1J postgres 5432 mydb sslmode disable The default left and right template delimiters are If no template is provided the following generic template is used For example the following annotation will use the default template to render PostgreSQL secrets found at the configured path yaml vault hashicorp com agent inject secret foo database roles pg app vault hashicorp com role app The rendered secret would look like this within the container shell session cat vault secrets foo password A1a BUEuQR52oAqPrP1J username v kubernet pg app q0Z7WPfVNqqTJuoDqCTY 1576529094 Some secrets such as KV are stored in maps Their data can be accessed using Data data NAME Renewals and updating secrets For more information on when Vault Agent fetches and renews secrets see the Agent documentation vault docs agent and proxy agent template renewals and updating secrets Vault agent configuration map For advanced use cases it may be required to define Vault Agent configuration files to mount instead of using secret and template annotations The Vault Agent Injector supports mounting ConfigMaps by specifying the name using the vault hashicorp com agent configmap annotation The configuration files will be mounted to vault configs The configuration map must contain either one or both of the following files config init hcl used by the init container This must have exit after auth set to true config hcl used by the sidecar container This must have exit after auth set to false An example of mounting a Vault Agent configmap can be found here vault docs platform k8s injector examples configmap example Tutorial Refer to the Injecting Secrets into Kubernetes Pods via Vault Helm Sidecar vault tutorials kubernetes kubernetes sidecar guide for a step by step tutorial |
---
layout: docs
page_title: Agent Sidecar Injector Annotations
description: This section documents the configurable annotations for the Vault Agent Injector.
---
# Annotations
The following are the available annotations for the injector. These annotations
are organized into two sections: agent and vault. All of the annotations below
change the configurations of the Vault Agent containers injected into the pod.
## Agent annotations
Agent annotations change the Vault Agent containers' templating configuration. For
example, agent annotations allow users to define what secrets they want, how to render
them, optional commands to run, etc.
- `vault.hashicorp.com/agent-inject` - configures whether injection is explicitly
enabled or disabled for a pod. This should be set to a `true` or `false` value.
Defaults to `false`.
- `vault.hashicorp.com/agent-inject-status` - blocks further mutations
by adding the value `injected` to the pod after a successful mutation.
- `vault.hashicorp.com/agent-configmap` - name of the configuration map where Vault
Agent configuration file and templates can be found.
- `vault.hashicorp.com/agent-image` - name of the Vault docker image to use. This
value overrides the default image configured in the injector and is usually
not needed. Defaults to `hashicorp/vault:1.18.1`.
- `vault.hashicorp.com/agent-init-first` - configures the pod to run the Vault Agent
init container first if `true` (last if `false`). This is useful when other init
containers need pre-populated secrets. This should be set to a `true` or `false`
value. Defaults to `false`.
- `vault.hashicorp.com/agent-inject-command` - configures Vault Agent
to run a command after the template has been rendered. To map a command to a specific
secret, use the same unique secret name: `vault.hashicorp.com/agent-inject-command-SECRET-NAME`.
For example, if a secret annotation `vault.hashicorp.com/agent-inject-secret-foobar`
is configured, `vault.hashicorp.com/agent-inject-command-foobar` would map a command
to that secret.
- `vault.hashicorp.com/agent-inject-secret` - configures Vault Agent
to retrieve the secrets from Vault required by the container. The name of the
secret is any unique string after `vault.hashicorp.com/agent-inject-secret-`,
such as `vault.hashicorp.com/agent-inject-secret-foobar`. The value is the path
in Vault where the secret is located.
- `vault.hashicorp.com/agent-inject-template` - configures the template Vault Agent
should use for rendering a secret. The name of the template is any
unique string after `vault.hashicorp.com/agent-inject-template-`, such as
`vault.hashicorp.com/agent-inject-template-foobar`. This should map to the same
unique value provided in `vault.hashicorp.com/agent-inject-secret-`. If not provided,
a default generic template is used.
- `vault.hashicorp.com/agent-template-left-delim` - configures the left delimiter for Vault Agent to
use when rendering a secret template. The name of the template is any unique string after
`vault.hashicorp.com/agent-template-left-delim-`, such as
`vault.hashicorp.com/agent-template-left-delim-foobar`. This should map to the same unique value
provided in `vault.hashicorp.com/agent-inject-template-`. If not provided, a default left
delimiter is used as defined by [Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#left_delimiter).
- `vault.hashicorp.com/agent-template-right-delim` - configures the right delimiter for Vault Agent
to use when rendering a secret template. The name of the template is any unique string after
`vault.hashicorp.com/agent-template-right-delim-`, such as
`vault.hashicorp.com/agent-template-right-delim-foobar`. This should map to the same unique value
provided in `vault.hashicorp.com/agent-inject-template-`. If not provided, a default right
delimiter is used as defined by [Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#right_delimiter).
- `vault.hashicorp.com/error-on-missing-key` - configures whether Vault Agent
should exit with an error when accessing a struct or map field/key that does
not exist. The name of the secret is the string after
`vault.hashicorp.com/error-on-missing-key-`, and should map to the same unique
value provided in `vault.hashicorp.com/agent-inject-secret-`. Defaults to
`false`. See [Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#template-configurations)
for more details.
- `vault.hashicorp.com/agent-inject-containers` - comma-separated list that specifies in
which containers the secrets volume should be mounted. If not provided, the secrets
volume will be mounted in all containers in the pod.
- `vault.hashicorp.com/secret-volume-path` - configures where on the filesystem a secret
will be rendered. To map a path to a specific secret, use the same unique secret name:
`vault.hashicorp.com/secret-volume-path-SECRET-NAME`. For example, if a secret annotation
`vault.hashicorp.com/agent-inject-secret-foobar` is configured,
`vault.hashicorp.com/secret-volume-path-foobar` would configure where that secret
is rendered. If no secret name is provided, this sets the default for all rendered
secrets in the pod.
- `vault.hashicorp.com/agent-inject-file` - configures the filename and path
in the secrets volume where a Vault secret will be written. This should be used
with `vault.hashicorp.com/secret-volume-path`, which mounts a memory volume to
the specified path. If `secret-volume-path` is used, the path can be omitted from
this value. To map a filename to a specific secret, use the same unique secret name:
`vault.hashicorp.com/agent-inject-file-SECRET-NAME`. For example, if a secret annotation
`vault.hashicorp.com/agent-inject-secret-foobar` is configured,
  `vault.hashicorp.com/agent-inject-file-foobar` would configure the filename, as
  shown in the combined sketch after this list.
- `vault.hashicorp.com/agent-inject-perms` - configures the permissions of the
file to create in the secrets volume. The name of the secret is the string
after "vault.hashicorp.com/agent-inject-perms-", and should map to the same
unique value provided in "vault.hashicorp.com/agent-inject-secret-". The value
is the octal permission, for example: `0644`.
- `vault.hashicorp.com/agent-inject-template-file` - configures the path and filename of the
custom template to use. This should be used with `vault.hashicorp.com/extra-secret`,
which mounts a Kubernetes secret to `/vault/custom`. To map a template file to a specific secret,
use the same unique secret name: `vault.hashicorp.com/agent-inject-template-file-SECRET-NAME`.
For example, if a secret annotation `vault.hashicorp.com/agent-inject-secret-foobar` is configured,
`vault.hashicorp.com/agent-inject-template-file-foobar` would configure the template file.
- `vault.hashicorp.com/agent-inject-default-template` - configures the default template type for rendering
secrets if no custom template is defined. Possible values include `map` and `json`. Defaults to `map`.
- `vault.hashicorp.com/template-config-exit-on-retry-failure` - controls whether
Vault Agent exits after it has exhausted its number of template retry attempts
due to failures. Defaults to `true`. See [Vault Agent Template
Config](/vault/docs/agent-and-proxy/agent/template#global-configurations) for more details.
- `vault.hashicorp.com/template-static-secret-render-interval` - If specified,
configures how often Vault Agent Template should render non-leased secrets such as KV v2.
See [Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#global-configurations) for more details.
- `vault.hashicorp.com/template-max-connections-per-host` - If specified, limits
the total number of connections that the Vault Agent templating engine can use
for a particular Vault host. The connection limit includes all connections in the dialing,
active, and idle states. See [Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#global-configurations)
for more details.
- `vault.hashicorp.com/agent-extra-secret` - mounts a Kubernetes secret as a volume at
`/vault/custom` in the sidecar/init containers. Useful for custom Agent configs with
auto-auth methods such as approle that require paths to secrets be present.
- `vault.hashicorp.com/agent-inject-token` - configures Vault Agent to share the Vault
token with other containers in the pod, in a file named `token` in the root of the
secrets volume (i.e. `/vault/secrets/token`). This is helpful when other containers
communicate directly with Vault but require auto-authentication provided by Vault
Agent. This should be set to a `true` or `false` value. Defaults to `false`.
- `vault.hashicorp.com/agent-limits-cpu` - configures the CPU limits on the Vault
Agent containers. Defaults to `500m`. Setting this to an empty string disables
CPU limits.
- `vault.hashicorp.com/agent-limits-mem` - configures the memory limits on the Vault
Agent containers. Defaults to `128Mi`. Setting this to an empty string disables
memory limits.
- `vault.hashicorp.com/agent-limits-ephemeral` - configures the ephemeral
storage limit on the Vault Agent containers. Defaults to unset, which
disables ephemeral storage limits. Also available as a command-line option
(`-ephemeral-storage-limit`) or environment variable (`AGENT_INJECT_EPHEMERAL_LIMIT`)
to set the default for all injected Agent containers. **Note:** Pod limits are
equal to the sum of all container limits. Setting this limit without setting it
for other containers will also affect the limits of other containers in the pod.
See [Kubernetes resources documentation][k8s-resources] for more details.
- `vault.hashicorp.com/agent-requests-cpu` - configures the CPU requests on the
Vault Agent containers. Defaults to `250m`. Setting this to an empty string disables
CPU requests.
- `vault.hashicorp.com/agent-requests-mem` - configures the memory requests on the
Vault Agent containers. Defaults to `64Mi`. Setting this to an empty string disables
memory requests.
- `vault.hashicorp.com/agent-requests-ephemeral` - configures the ephemeral
storage requests on the Vault Agent Containers. Defaults to unset, which
disables ephemeral storage requests (and will default to the ephemeral limit
if set). Also available as a command-line option (`-ephemeral-storage-request`)
or environment variable (`AGENT_INJECT_EPHEMERAL_REQUEST`) to set the default
for all injected Agent containers. **Note:** Pod requests are equal to the sum
of all container requests. Setting this limit without setting it for other
containers will also affect the requests of other containers in the pod. See
[Kubernetes resources documentation][k8s-resources] for more details.
- `vault.hashicorp.com/agent-revoke-on-shutdown` - configures whether the sidecar
  will revoke its own token before shutting down. This setting will only be applied
to the Vault Agent sidecar container. This should be set to a `true` or `false`
value. Defaults to `false`.
- `vault.hashicorp.com/agent-revoke-grace` - configures the grace period, in seconds,
  for revoking its own token before shutting down. This setting will only be applied
to the Vault Agent sidecar container. Defaults to `5s`.
- `vault.hashicorp.com/agent-pre-populate` - configures whether an init container
is included to pre-populate the shared memory volume with secrets prior to the
containers starting. This should be set to a `true` or `false` value. Defaults
to `true`.
- `vault.hashicorp.com/agent-pre-populate-only` - configures whether an init container
is the only injected container. If true, no sidecar container will be injected
at runtime of the pod. Enabling this option is recommended for workloads of
type `CronJob` or `Job` to ensure a clean pod termination.
- `vault.hashicorp.com/preserve-secret-case` - configures Vault Agent to preserve
the secret name case when creating the secret files. This should be set to a `true`
or `false` value. Defaults to `false`.
- `vault.hashicorp.com/agent-run-as-user` - sets the user (uid) to run Vault
agent as. Also available as a command-line option (`-run-as-user`) or
environment variable (`AGENT_INJECT_RUN_AS_USER`) for the injector. Defaults
to 100.
- `vault.hashicorp.com/agent-run-as-group` - sets the group (gid) to run Vault
agent as. Also available as a command-line option (`-run-as-group`) or
environment variable (`AGENT_INJECT_RUN_AS_GROUP`) for the injector. Defaults
to 1000.
- `vault.hashicorp.com/agent-set-security-context` - controls whether
`SecurityContext` is set in injected containers. Also available as a
command-line option (`-set-security-context`) or environment variable
(`AGENT_INJECT_SET_SECURITY_CONTEXT`). Defaults to `true`.
- `vault.hashicorp.com/agent-run-as-same-user` - run the injected Vault agent
containers as the User (uid) of the first application container in the pod.
Requires `Spec.Containers[0].SecurityContext.RunAsUser` to be set in the pod
spec. Also available as a command-line option (`-run-as-same-user`) or
environment variable (`AGENT_INJECT_RUN_AS_SAME_USER`). Defaults to `false`.
~> **Note**: If the first application container in the pod is running as root
(uid 0), the `run-as-same-user` annotation will fail injection with an error.
- `vault.hashicorp.com/agent-share-process-namespace` - sets
[shareProcessNamespace] in the Pod spec where Vault Agent is injected.
Defaults to `false`.
- `vault.hashicorp.com/agent-cache-enable` - configures Vault Agent to enable
[caching](/vault/docs/agent-and-proxy/agent/caching). In Vault 1.7+ this annotation will also enable
a Vault Agent persistent cache. This persistent cache will be shared between the init
and sidecar container to reuse tokens and leases retrieved by the init container.
Defaults to `false`.
- `vault.hashicorp.com/agent-cache-use-auto-auth-token` - configures Vault Agent cache
to authenticate on behalf of the requester. Set to `force` to enable. Disabled
by default.
- `vault.hashicorp.com/agent-cache-listener-port` - configures Vault Agent cache
listening port. Defaults to `8200`.
- `vault.hashicorp.com/agent-copy-volume-mounts` - copies the mounts from the specified
container and mounts them to the Vault Agent containers. The service account volume is
ignored.
- `vault.hashicorp.com/agent-service-account-token-volume-name` - the optional name of a projected volume containing a service account token for use with auto-auth against Vault's Kubernetes auth method. If the volume is mounted to another container in the deployment, the token volume will be mounted to the same location in the vault-agent containers. Otherwise it will be mounted at the default location of `/var/run/secrets/vault.hashicorp.com/serviceaccount/`.
- `vault.hashicorp.com/agent-enable-quit` - enable the [`/agent/v1/quit` endpoint](/vault/docs/agent-and-proxy/agent#quit) on an injected agent. This option defaults to false, and if true will be set on the existing cache listener, or a new localhost listener with a basic cache stanza configured. The [agent-cache-listener-port annotation](/vault/docs/platform/k8s/injector/annotations#vault-hashicorp-com-agent-cache-listener-port) can be used to change the port.
- `vault.hashicorp.com/agent-telemetry` - specifies the [telemetry](/vault/docs/configuration/telemetry) configuration for the
Vault Agent sidecar. The name of the config is any unique string after
`vault.hashicorp.com/agent-telemetry-`, such as `vault.hashicorp.com/agent-telemetry-prometheus_retention_time`.
This annotation can be reused multiple times to configure multiple settings for the agent telemetry.
- `vault.hashicorp.com/go-max-procs` - set the `GOMAXPROCS` environment variable for injected agents.
- `vault.hashicorp.com/agent-json-patch` - change the injected agent sidecar container using a [JSON patch](https://jsonpatch.com/) before it is created.
This can be used to add, remove, or modify any attribute of the container.
For example, setting this to `[{"op": "replace", "path": "/name", "value": "different-name"}]` will update the agent container's name to be `different-name`
instead of the default `vault-agent`.
- `vault.hashicorp.com/agent-init-json-patch` - same as `vault.hashicorp.com/agent-json-patch`, except that the JSON patch will be applied to the
injected init container instead.
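As a combined sketch of the file-placement annotations above (`agent-inject-file`,
`secret-volume-path`, and `agent-inject-perms`), the following hypothetical
annotations render a secret to `/etc/app/app.env` with `0640` permissions; the
secret path and filenames are illustrative only:

```yaml
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/role: 'app'
vault.hashicorp.com/agent-inject-secret-foobar: 'secret/data/foobar'
vault.hashicorp.com/agent-inject-file-foobar: 'app.env'
vault.hashicorp.com/secret-volume-path-foobar: '/etc/app'
vault.hashicorp.com/agent-inject-perms-foobar: '0640'
```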
## Vault annotations
Vault annotations change how the Vault Agent containers communicate with Vault. For
example, Vault's address, TLS certificates to use, client parameters such as timeouts,
etc.
- `vault.hashicorp.com/auth-config` - configures additional parameters for the configured
authentication method. The name of the config is any unique string after
`vault.hashicorp.com/auth-config-`, such as `vault.hashicorp.com/auth-config-role-id-file-path`.
This annotation can be reused multiple times to configure multiple settings for the authentication
method. Some authentication methods may require additional secrets and should be mounted via the
`vault.hashicorp.com/agent-extra-secret` annotation. For a list of valid authentication configurations,
  see the Vault Agent [auto-auth documentation](/vault/docs/agent-and-proxy/autoauth/methods).
  An AppRole sketch follows this list.
- `vault.hashicorp.com/auth-path` - configures the authentication path for the Kubernetes
auth method. Defaults to `auth/kubernetes`.
- `vault.hashicorp.com/auth-type` - configures the authentication type for Vault Agent.
Defaults to `kubernetes`. For a list of valid authentication methods, see the Vault Agent
[auto-auth documentation](/vault/docs/agent-and-proxy/autoauth/methods).
- `vault.hashicorp.com/auth-min-backoff` - set the [min_backoff](/vault/docs/agent-and-proxy/autoauth#min_backoff) option in the auto-auth config. Requires Vault 1.11+.
- `vault.hashicorp.com/auth-max-backoff` - set the [max_backoff](/vault/docs/agent-and-proxy/autoauth#max_backoff) option in the auto-auth config
- `vault.hashicorp.com/agent-auto-auth-exit-on-err` - set the [exit_on_err](/vault/docs/agent-and-proxy/autoauth#exit_on_err) option in the auto-auth config
- `vault.hashicorp.com/ca-cert` - path of the CA certificate used to verify Vault's
TLS. This can also be set as the default for all injected Agents via the
`AGENT_INJECT_VAULT_CACERT_BYTES` environment variable which takes a PEM-encoded
certificate or bundle.
- `vault.hashicorp.com/ca-key` - path of the CA public key used to verify Vault's
TLS.
- `vault.hashicorp.com/client-cert` - path of the client certificate used when
communicating with Vault via mTLS.
- `vault.hashicorp.com/client-key` - path of the client private key used when communicating
with Vault via mTLS.
- `vault.hashicorp.com/client-max-retries` - configures number of Vault Agent retry
attempts when certain errors are encountered. Defaults to 2, for 3 total attempts.
Set this to `0` or less to disable retrying. Error codes that are retried are 412
(client consistency requirement not satisfied) and all 5xx except for 501 (not implemented).
- `vault.hashicorp.com/client-timeout` - configures the request timeout threshold,
in seconds, of the Vault Agent when communicating with Vault. Defaults to `60s`
and accepts value types of `60`, `60s` or `1m`.
- `vault.hashicorp.com/log-level` - configures the verbosity of the Vault Agent
log level. Default is `info`.
- `vault.hashicorp.com/log-format` - configures the log type for Vault Agent. Possible
values are `standard` and `json`. Default is `standard`.
- `vault.hashicorp.com/namespace` - configures the Vault Enterprise namespace to
be used when requesting secrets from Vault. Also available as a command-line
option (`-vault-namespace`) or environment variable
(`AGENT_INJECT_VAULT_NAMESPACE`) to set the default namespace for all injected
Agents.
- `vault.hashicorp.com/proxy-address` - configures the HTTP proxy to use when connecting
to a Vault server.
- `vault.hashicorp.com/role` - configures the Vault role used by the Vault Agent
auto-auth method. Required when `vault.hashicorp.com/agent-configmap` is not set.
- `vault.hashicorp.com/service` - configures the Vault address for the injected
Vault Agent to use. This value overrides the default Vault address configured
in the injector, and may either be the address of a Vault service within the
same Kubernetes cluster as the injector, or an external Vault URL.
- `vault.hashicorp.com/tls-secret` - name of the Kubernetes secret containing TLS
Client and CA certificates and keys. This is mounted to `/vault/tls`.
- `vault.hashicorp.com/tls-server-name` - name of the Vault server to verify the
authenticity of the server when communicating with Vault over TLS.
- `vault.hashicorp.com/tls-skip-verify` - if true, configures the Vault Agent to
skip verification of Vault's TLS certificate. It's not recommended to set this
value to true in a production environment.
- `vault.hashicorp.com/agent-disable-idle-connections` - Comma-separated [list
of Vault Agent features](/vault/docs/agent-and-proxy/agent#disable_idle_connections) where idle
connections should be disabled. Also available as a command-line option
(`-disable-idle-connections`) or environment variable
(`AGENT_INJECT_DISABLE_IDLE_CONNECTIONS`) to set the default for all injected
Agents.
- `vault.hashicorp.com/agent-disable-keep-alives` - Comma-separated [list of
Vault Agent features](/vault/docs/agent-and-proxy/agent#disable_keep_alives) where keep-alives
should be disabled. Also available as a command-line option
(`-disable-keep-alives`) or environment variable
(`AGENT_INJECT_DISABLE_KEEP_ALIVES`) to set the default for all injected
Agents.
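For example, the auth annotations above can be combined to authenticate with the
AppRole method instead of Kubernetes. This is a sketch only, assuming an `approle`
mount and role-id/secret-id files supplied through the
`vault.hashicorp.com/agent-extra-secret` Kubernetes secret:

```yaml
vault.hashicorp.com/agent-extra-secret: 'approle-example'
vault.hashicorp.com/auth-type: 'approle'
vault.hashicorp.com/auth-path: 'auth/approle'
vault.hashicorp.com/auth-config-role-id-file-path: '/vault/custom/role-id'
vault.hashicorp.com/auth-config-secret-id-file-path: '/vault/custom/secret-id'
```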
[k8s-resources]: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container
[shareProcessNamespace]: https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/ | vault | layout docs page title Agent Sidecar Injector Annotations description This section documents the configurable annotations for the Vault Agent Injector Annotations The following are the available annotations for the injector These annotations are organized into two sections agent and vault All of the annotations below change the configurations of the Vault Agent containers injected into the pod Agent annotations Agent annotations change the Vault Agent containers templating configuration For example agent annotations allow users to define what secrets they want how to render them optional commands to run etc vault hashicorp com agent inject configures whether injection is explicitly enabled or disabled for a pod This should be set to a true or false value Defaults to false vault hashicorp com agent inject status blocks further mutations by adding the value injected to the pod after a successful mutation vault hashicorp com agent configmap name of the configuration map where Vault Agent configuration file and templates can be found vault hashicorp com agent image name of the Vault docker image to use This value overrides the default image configured in the injector and is usually not needed Defaults to hashicorp vault 1 18 1 vault hashicorp com agent init first configures the pod to run the Vault Agent init container first if true last if false This is useful when other init containers need pre populated secrets This should be set to a true or false value Defaults to false vault hashicorp com agent inject command configures Vault Agent to run a command after the template has been rendered To map a command to a specific secret use the same unique secret name vault hashicorp com agent inject command SECRET NAME For example if a secret annotation vault hashicorp com agent inject secret foobar is configured vault hashicorp com agent inject command foobar would map a command to that secret vault hashicorp com agent inject secret configures Vault Agent to retrieve the secrets from Vault required by the container The name of the secret is any unique string after vault hashicorp com agent inject secret such as vault hashicorp com agent inject secret foobar The value is the path in Vault where the secret is located vault hashicorp com agent inject template configures the template Vault Agent should use for rendering a secret The name of the template is any unique string after vault hashicorp com agent inject template such as vault hashicorp com agent inject template foobar This should map to the same unique value provided in vault hashicorp com agent inject secret If not provided a default generic template is used vault hashicorp com agent template left delim configures the left delimiter for Vault Agent to use when rendering a secret template The name of the template is any unique string after vault hashicorp com agent template left delim such as vault hashicorp com agent template left delim foobar This should map to the same unique value provided in vault hashicorp com agent inject template If not provided a default left delimiter is used as defined by Vault Agent Template Config vault docs agent and proxy agent template left delimiter vault hashicorp com agent template right delim configures the right delimiter for Vault Agent to use when rendering a secret template The name of the template is any unique string after vault hashicorp com agent template right delim such as vault 
hashicorp com agent template right delim foobar This should map to the same unique value provided in vault hashicorp com agent inject template If not provided a default right delimiter is used as defined by Vault Agent Template Config vault docs agent and proxy agent template right delimiter vault hashicorp com error on missing key configures whether Vault Agent should exit with an error when accessing a struct or map field key that does not exist The name of the secret is the string after vault hashicorp com error on missing key and should map to the same unique value provided in vault hashicorp com agent inject secret Defaults to false See Vault Agent Template Config vault docs agent and proxy agent template template configurations for more details vault hashicorp com agent inject containers comma separated list that specifies in which containers the secrets volume should be mounted If not provided the secrets volume will be mounted in all containers in the pod vault hashicorp com secret volume path configures where on the filesystem a secret will be rendered To map a path to a specific secret use the same unique secret name vault hashicorp com secret volume path SECRET NAME For example if a secret annotation vault hashicorp com agent inject secret foobar is configured vault hashicorp com secret volume path foobar would configure where that secret is rendered If no secret name is provided this sets the default for all rendered secrets in the pod vault hashicorp com agent inject file configures the filename and path in the secrets volume where a Vault secret will be written This should be used with vault hashicorp com secret volume path which mounts a memory volume to the specified path If secret volume path is used the path can be omitted from this value To map a filename to a specific secret use the same unique secret name vault hashicorp com agent inject file SECRET NAME For example if a secret annotation vault hashicorp com agent inject secret foobar is configured vault hashicorp com agent inject file foobar would configure the filename vault hashicorp com agent inject perms configures the permissions of the file to create in the secrets volume The name of the secret is the string after vault hashicorp com agent inject perms and should map to the same unique value provided in vault hashicorp com agent inject secret The value is the octal permission for example 0644 vault hashicorp com agent inject template file configures the path and filename of the custom template to use This should be used with vault hashicorp com extra secret which mounts a Kubernetes secret to vault custom To map a template file to a specific secret use the same unique secret name vault hashicorp com agent inject template file SECRET NAME For example if a secret annotation vault hashicorp com agent inject secret foobar is configured vault hashicorp com agent inject template file foobar would configure the template file vault hashicorp com agent inject default template configures the default template type for rendering secrets if no custom template is defined Possible values include map and json Defaults to map vault hashicorp com template config exit on retry failure controls whether Vault Agent exits after it has exhausted its number of template retry attempts due to failures Defaults to true See Vault Agent Template Config vault docs agent and proxy agent template global configurations for more details vault hashicorp com template static secret render interval If specified configures how often Vault Agent 
Template should render non leased secrets such as KV v2 See Vault Agent Template Config vault docs agent and proxy agent template global configurations for more details vault hashicorp com template max connections per host If specified limits the total number of connections that the Vault Agent templating engine can use for a particular Vault host The connection limit includes all connections in the dialing active and idle states See Vault Agent Template Config vault docs agent and proxy agent template global configurations for more details vault hashicorp com agent extra secret mounts Kubernetes secret as a volume at vault custom in the sidecar init containers Useful for custom Agent configs with auto auth methods such as approle that require paths to secrets be present vault hashicorp com agent inject token configures Vault Agent to share the Vault token with other containers in the pod in a file named token in the root of the secrets volume i e vault secrets token This is helpful when other containers communicate directly with Vault but require auto authentication provided by Vault Agent This should be set to a true or false value Defaults to false vault hashicorp com agent limits cpu configures the CPU limits on the Vault Agent containers Defaults to 500m Setting this to an empty string disables CPU limits vault hashicorp com agent limits mem configures the memory limits on the Vault Agent containers Defaults to 128Mi Setting this to an empty string disables memory limits vault hashicorp com agent limits ephemeral configures the ephemeral storage limit on the Vault Agent containers Defaults to unset which disables ephemeral storage limits Also available as a command line option ephemeral storage limit or environment variable AGENT INJECT EPHEMERAL LIMIT to set the default for all injected Agent containers Note Pod limits are equal to the sum of all container limits Setting this limit without setting it for other containers will also affect the limits of other containers in the pod See Kubernetes resources documentation k8s resources for more details vault hashicorp com agent requests cpu configures the CPU requests on the Vault Agent containers Defaults to 250m Setting this to an empty string disables CPU requests vault hashicorp com agent requests mem configures the memory requests on the Vault Agent containers Defaults to 64Mi Setting this to an empty string disables memory requests vault hashicorp com agent requests ephemeral configures the ephemeral storage requests on the Vault Agent Containers Defaults to unset which disables ephemeral storage requests and will default to the ephemeral limit if set Also available as a command line option ephemeral storage request or environment variable AGENT INJECT EPHEMERAL REQUEST to set the default for all injected Agent containers Note Pod requests are equal to the sum of all container requests Setting this limit without setting it for other containers will also affect the requests of other containers in the pod See Kubernetes resources documentation k8s resources for more details vault hashicorp com agent revoke on shutdown configures whether the sidecar will revoke it s own token before shutting down This setting will only be applied to the Vault Agent sidecar container This should be set to a true or false value Defaults to false vault hashicorp com agent revoke grace configures the grace period in seconds for revoking it s own token before shutting down This setting will only be applied to the Vault Agent sidecar container Defaults to 5s 
vault hashicorp com agent pre populate configures whether an init container is included to pre populate the shared memory volume with secrets prior to the containers starting This should be set to a true or false value Defaults to true vault hashicorp com agent pre populate only configures whether an init container is the only injected container If true no sidecar container will be injected at runtime of the pod Enabling this option is recommended for workloads of type CronJob or Job to ensure a clean pod termination vault hashicorp com preserve secret case configures Vault Agent to preserve the secret name case when creating the secret files This should be set to a true or false value Defaults to false vault hashicorp com agent run as user sets the user uid to run Vault agent as Also available as a command line option run as user or environment variable AGENT INJECT RUN AS USER for the injector Defaults to 100 vault hashicorp com agent run as group sets the group gid to run Vault agent as Also available as a command line option run as group or environment variable AGENT INJECT RUN AS GROUP for the injector Defaults to 1000 vault hashicorp com agent set security context controls whether SecurityContext is set in injected containers Also available as a command line option set security context or environment variable AGENT INJECT SET SECURITY CONTEXT Defaults to true vault hashicorp com agent run as same user run the injected Vault agent containers as the User uid of the first application container in the pod Requires Spec Containers 0 SecurityContext RunAsUser to be set in the pod spec Also available as a command line option run as same user or environment variable AGENT INJECT RUN AS SAME USER Defaults to false Note If the first application container in the pod is running as root uid 0 the run as same user annotation will fail injection with an error vault hashicorp com agent share process namespace sets shareProcessNamespace in the Pod spec where Vault Agent is injected Defaults to false vault hashicorp com agent cache enable configures Vault Agent to enable caching vault docs agent and proxy agent caching In Vault 1 7 this annotation will also enable a Vault Agent persistent cache This persistent cache will be shared between the init and sidecar container to reuse tokens and leases retrieved by the init container Defaults to false vault hashicorp com agent cache use auto auth token configures Vault Agent cache to authenticate on behalf of the requester Set to force to enable Disabled by default vault hashicorp com agent cache listener port configures Vault Agent cache listening port Defaults to 8200 vault hashicorp com agent copy volume mounts copies the mounts from the specified container and mounts them to the Vault Agent containers The service account volume is ignored vault hashicorp com agent service account token volume name the optional name of a projected volume containing a service account token for use with auto auth against Vault s Kubernetes auth method If the volume is mounted to another container in the deployment the token volume will be mounted to the same location in the vault agent containers Otherwise it will be mounted at the default location of var run secrets vault hashicorp com serviceaccount vault hashicorp com agent enable quit enable the agent v1 quit endpoint vault docs agent and proxy agent quit on an injected agent This option defaults to false and if true will be set on the existing cache listener or a new localhost listener with a basic cache stanza 
configured The agent cache listener port annotation vault docs platform k8s injector annotations vault hashicorp com agent cache listener port can be used to change the port vault hashicorp com agent telemetry specifies the telemetry vault docs configuration telemetry configuration for the Vault Agent sidecar The name of the config is any unique string after vault hashicorp com agent telemetry such as vault hashicorp com agent telemetry prometheus retention time This annotation can be reused multiple times to configure multiple settings for the agent telemetry vault hashicorp com go max procs set the GOMAXPROCS environment variable for injected agents vault hashicorp com agent json patch change the injected agent sidecar container using a JSON patch https jsonpatch com before it is created This can be used to add remove or modify any attribute of the container For example setting this to op replace path name value different name will update the agent container s name to be different name instead of the default vault agent vault hashicorp com agent init json patch same as vault hashicorp com agent json patch except that the JSON patch will be applied to the injected init container instead Vault annotations Vault annotations change how the Vault Agent containers communicate with Vault For example Vault s address TLS certificates to use client parameters such as timeouts etc vault hashicorp com auth config configures additional parameters for the configured authentication method The name of the config is any unique string after vault hashicorp com auth config such as vault hashicorp com auth config role id file path This annotation can be reused multiple times to configure multiple settings for the authentication method Some authentication methods may require additional secrets and should be mounted via the vault hashicorp com agent extra secret annotation For a list of valid authentication configurations see the Vault Agent auto auth documentation vault docs agent and proxy autoauth methods vault hashicorp com auth path configures the authentication path for the Kubernetes auth method Defaults to auth kubernetes vault hashicorp com auth type configures the authentication type for Vault Agent Defaults to kubernetes For a list of valid authentication methods see the Vault Agent auto auth documentation vault docs agent and proxy autoauth methods vault hashicorp com auth min backoff set the min backoff vault docs agent and proxy autoauth min backoff option in the auto auth config Requires Vault 1 11 vault hashicorp com auth max backoff set the max backoff vault docs agent and proxy autoauth max backoff option in the auto auth config vault hashicorp com agent auto auth exit on err set the exit on err vault docs agent and proxy autoauth exit on err option in the auto auth config vault hashicorp com ca cert path of the CA certificate used to verify Vault s TLS This can also be set as the default for all injected Agents via the AGENT INJECT VAULT CACERT BYTES environment variable which takes a PEM encoded certificate or bundle vault hashicorp com ca key path of the CA public key used to verify Vault s TLS vault hashicorp com client cert path of the client certificate used when communicating with Vault via mTLS vault hashicorp com client key path of the client public key used when communicating with Vault via mTLS vault hashicorp com client max retries configures number of Vault Agent retry attempts when certain errors are encountered Defaults to 2 for 3 total attempts Set this to 0 or less to 
disable retrying Error codes that are retried are 412 client consistency requirement not satisfied and all 5xx except for 501 not implemented vault hashicorp com client timeout configures the request timeout threshold in seconds of the Vault Agent when communicating with Vault Defaults to 60s and accepts value types of 60 60s or 1m vault hashicorp com log level configures the verbosity of the Vault Agent log level Default is info vault hashicorp com log format configures the log type for Vault Agent Possible values are standard and json Default is standard vault hashicorp com namespace configures the Vault Enterprise namespace to be used when requesting secrets from Vault Also available as a command line option vault namespace or environment variable AGENT INJECT VAULT NAMESPACE to set the default namespace for all injected Agents vault hashicorp com proxy address configures the HTTP proxy to use when connecting to a Vault server vault hashicorp com role configures the Vault role used by the Vault Agent auto auth method Required when vault hashicorp com agent configmap is not set vault hashicorp com service configures the Vault address for the injected Vault Agent to use This value overrides the default Vault address configured in the injector and may either be the address of a Vault service within the same Kubernetes cluster as the injector or an external Vault URL vault hashicorp com tls secret name of the Kubernetes secret containing TLS Client and CA certificates and keys This is mounted to vault tls vault hashicorp com tls server name name of the Vault server to verify the authenticity of the server when communicating with Vault over TLS vault hashicorp com tls skip verify if true configures the Vault Agent to skip verification of Vault s TLS certificate It s not recommended to set this value to true in a production environment vault hashicorp com agent disable idle connections Comma separated list of Vault Agent features vault docs agent and proxy agent disable idle connections where idle connections should be disabled Also available as a command line option disable idle connections or environment variable AGENT INJECT DISABLE IDLE CONNECTIONS to set the default for all injected Agents vault hashicorp com agent disable keep alives Comma separated list of Vault Agent features vault docs agent and proxy agent disable keep alives where keep alives should be disabled Also available as a command line option disable keep alives or environment variable AGENT INJECT DISABLE KEEP ALIVES to set the default for all injected Agents k8s resources https kubernetes io docs concepts configuration manage resources containers resource requests and limits of pod and container shareProcessNamespace https kubernetes io docs tasks configure pod container share process namespace |
---
layout: docs
page_title: Agent Sidecar Injector Installation
description: The Vault Agent Sidecar Injector can be installed using Vault Helm.
---
# Installing the agent injector
The [Vault Helm chart](/vault/docs/platform/k8s/helm) is the recommended way to
install and configure the Agent Injector in Kubernetes.
~> The Vault Agent Injector requires Vault 1.3.1 or greater.
To install a new instance of Vault and the Vault Agent Injector, first add the
HashiCorp Helm repository and ensure you have access to the chart:
@include 'helm/repo.mdx'
Then install the chart and enable the injection feature by setting the
`injector.enabled` value to `true`:
```bash
helm install vault hashicorp/vault --set="injector.enabled=true"
```
Upgrades may be performed with `helm upgrade` on an existing install. Please
always run Helm with `--dry-run` before any install or upgrade to verify
changes.
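For example, assuming an existing release named `vault`:

```bash
helm upgrade vault hashicorp/vault --set="injector.enabled=true" --dry-run
helm upgrade vault hashicorp/vault --set="injector.enabled=true"
```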
You can see all the available values settings by running `helm inspect values hashicorp/vault` or by reading the [Vault Helm Configuration
Docs](/vault/docs/platform/k8s/helm/configuration). Commonly used values in the Helm
chart include limiting the namespaces the injector runs in, TLS options and
more.
## TLS options
Admission webhook controllers require TLS to run within Kubernetes. The Injector
defaults to supporting TLS 1.2 and above, and supports configuring the minimum
supported TLS version and list of enabled cipher suites. These can be set via
the following environment variables:
| Environment variable | Description |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `AGENT_INJECT_TLS_MIN_VERSION` | Minimum supported version of TLS. Defaults to **tls12**. Accepted values are `tls10`, `tls11`, `tls12`, or `tls13`. |
| `AGENT_INJECT_TLS_CIPHER_SUITES` | Comma-separated list of enabled [cipher suites][tls-suites] for TLS 1.0-1.2. (Cipher suites are not configurable for TLS 1.3.) |
~> **Warning**: TLS 1.1 and lower are generally considered insecure.
These may be set in a Helm chart deployment via the
[injector.extraEnvironmentVars](/vault/docs/platform/k8s/helm/configuration#extraenvironmentvars)
option:
```bash
helm install vault hashicorp/vault \
--set="injector.extraEnvironmentVars.AGENT_INJECT_TLS_MIN_VERSION=tls13" \
--set="injector.extraEnvironmentVars.AGENT_INJECT_TLS_CIPHER_SUITES=..."
```
The Vault Agent Injector also supports two TLS management options:
- Auto TLS generation (default)
- Manual TLS
### Auto TLS
By default, the Vault Agent Injector will bootstrap TLS by generating a certificate
authority and creating a certificate/key to be used by the controller. If using
Vault Helm, the chart will automatically create the necessary DNS entries for the
controller's service used to verify the certificate.
### Manual TLS
If desired, users can supply their own TLS certificate, key, and certificate authority.
The following is required to configure TLS manually:
- Server certificate/key
- Base64 PEM encoded Certificate Authority bundle
For more information on configuring manual TLS, see the [Vault Helm cert values](/vault/docs/platform/k8s/helm/configuration#certs).
This option may also be used in conjunction with [cert-manager for certificate management](/vault/docs/platform/k8s/helm/examples/injector-tls-cert-manager).
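As a minimal sketch, the certificate material can be supplied through the Helm
chart's `injector.certs` values; the secret name and CA file below are
illustrative assumptions:

```bash
# injector-tls is assumed to be a Kubernetes secret containing tls.crt and tls.key
helm install vault hashicorp/vault \
  --set="injector.enabled=true" \
  --set="injector.certs.secretName=injector-tls" \
  --set="injector.certs.caBundle=$(base64 < ca.pem | tr -d '\n')"
```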
## Multiple replicas and TLS
The Vault Agent Injector can be run with multiple replicas if using [Manual
TLS](#manual-tls) or [cert-manager](/vault/docs/platform/k8s/helm/examples/injector-tls-cert-manager), and as of v0.7.0 multiple replicas are also supported with
[Auto TLS](#auto-tls). The number of replicas is controlled in the Vault Helm
chart by the [injector.replicas
value](/vault/docs/platform/k8s/helm/configuration#replicas).
With Auto TLS and multiple replicas, a leader replica is determined by ownership
of a ConfigMap named `vault-k8s-leader`. Another replica can become the leader
once the current leader replica stops running, and the Kubernetes garbage
collector deletes the ConfigMap. The leader replica is in charge of generating
the CA and patching the webhook caBundle in Kubernetes, and also generating and
distributing the certificate and key to the "followers". The followers read the
certificate and key needed for the webhook service listener from a Kubernetes
Secret, which is updated by the leader when a certificate is near expiration.
With Manual TLS and multiple replicas,
[injector.leaderElector.enabled](/vault/docs/platform/k8s/helm/configuration#enabled-2)
can be set to `false` since leader determination is not necessary in this case.
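For example, a sketch of running three injector replicas with the leader
elector disabled for a Manual TLS deployment (the replica count is
illustrative):

```bash
helm install vault hashicorp/vault \
  --set="injector.enabled=true" \
  --set="injector.replicas=3" \
  --set="injector.leaderElector.enabled=false"
```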
## Namespace selector
By default, the Vault Agent Injector will process all namespaces in Kubernetes except
the system namespaces `kube-system` and `kube-public`. To limit the namespaces
the injector can work in, a namespace selector can be defined to match labels attached
to namespaces.
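For instance, a sketch restricting injection to namespaces carrying an
`injection: enabled` label (the label key and value are arbitrary examples):

```bash
# Only namespaces labeled injection=enabled will be processed by the injector
helm install vault hashicorp/vault \
  --set="injector.enabled=true" \
  --set="injector.namespaceSelector.matchLabels.injection=enabled"
```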
For more information on configuring namespace selection, see the [Vault Helm namespaceSelector value](/vault/docs/platform/k8s/helm/configuration#namespaceselector).
[tls-suites]: https://golang.org/src/crypto/tls/cipher_suites.go | vault | layout docs page title Agent Sidecar Injector Installation description The Vault Agent Sidecar Injector can be installed using Vault Helm Installing the agent injector The Vault Helm chart vault docs platform k8s helm is the recommended way to install and configure the Agent Injector in Kubernetes The Vault Agent Injector requires Vault 1 3 1 or greater To install a new instance of Vault and the Vault Agent Injector first add the Hashicorp helm repository and ensure you have access to the chart include helm repo mdx Then install the chart and enable the injection feature by setting the injector enabled value to true bash helm install vault hashicorp vault set injector enabled true Upgrades may be performed with helm upgrade on an existing install Please always run Helm with dry run before any install or upgrade to verify changes You can see all the available values settings by running helm inspect values hashicorp vault or by reading the Vault Helm Configuration Docs vault docs platform k8s helm configuration Commonly used values in the Helm chart include limiting the namespaces the injector runs in TLS options and more TLS options Admission webhook controllers require TLS to run within Kubernetes The Injector defaults to supporting TLS 1 2 and above and supports configuring the minimum supported TLS version and list of enabled cipher suites These can be set via the following environment variables Environment variable Description AGENT INJECT TLS MIN VERSION Minimum supported version of TLS Defaults to tls12 Accepted values are tls10 tls11 tls12 or tls13 AGENT INJECT TLS CIPHER SUITES Comma separated list of enabled cipher suites tls suites for TLS 1 0 1 2 Cipher suites are not configurable for TLS 1 3 Warning TLS 1 1 and lower are generally considered insecure These may be set in a Helm chart deployment via the injector extraEnvironmentVars vault docs platform k8s helm configuration extraenvironmentvars option bash helm install vault hashicorp vault set injector extraEnvironmentVars AGENT INJECT TLS MIN VERSION tls13 set injector extraEnvironmentVars AGENT INJECT TLS CIPHER SUITES The Vault Agent Injector also supports two TLS management options Auto TLS generation default Manual TLS Auto TLS By default the Vault Agent Injector will bootstrap TLS by generating a certificate authority and creating a certificate key to be used by the controller If using Vault Helm the chart will automatically create the necessary DNS entries for the controller s service used to verify the certificate Manual TLS If desired users can supply their own TLS certificates key and certificate authority The following is required to configure TLS manually Server certificate key Base64 PEM encoded Certificate Authority bundle For more information on configuring manual TLS see the Vault Helm cert values vault docs platform k8s helm configuration certs This option may also be used in conjunction with cert manager for certificate management vault docs platform k8s helm examples injector tls cert manager Multiple replicas and TLS The Vault Agent Injector can be run with multiple replicas if using Manual TLS manual tls or cert manager vault docs platform k8s helm examples injector tls cert manager and as of v0 7 0 multiple replicas are also supported with Auto TLS auto tls The number of replicas is controlled in the Vault Helm chart by the injector replicas value vault docs platform k8s helm configuration replicas With Auto TLS and multiple replicas 
a leader replica is determined by ownership of a ConfigMap named vault k8s leader Another replica can become the leader once the current leader replica stops running and the Kubernetes garbage collector deletes the ConfigMap The leader replica is in charge of generating the CA and patching the webhook caBundle in Kubernetes and also generating and distributing the certificate and key to the followers The followers read the certificate and key needed for the webhook service listener from a Kubernetes Secret which is updated by the leader when a certificate is near expiration With Manual TLS and multiple replicas injector leaderElector enabled vault docs platform k8s helm configuration enabled 2 can be set to false since leader determination is not necessary in this case Namespace selector By default the Vault Agent Injector will process all namespaces in Kubernetes except the system namespaces kube system and kube public To limit what namespaces the injector can work in a namespace selector can be defined to match labels attached to namespaces For more information on configuring namespace selection see the Vault Helm namespaceSelector value vault docs platform k8s helm configuration namespaceselector tls suites https golang org src crypto tls cipher suites go |
---
layout: docs
page_title: Vault Lambda Extension
description: >-
The Vault Lambda Extension allows a Lambda function to read secrets from a Vault deployment.
---
# Vault lambda extension
AWS Lambda lets you run code without provisioning and managing servers.
The [Vault Lambda Extension](https://github.com/hashicorp/vault-lambda-extension) utilizes the AWS Lambda Extensions API to help your Lambda function read secrets from your Vault deployment.
You can use the [quick-start](https://github.com/hashicorp/vault-lambda-extension/tree/main/quick-start) directory which has an end-to-end example if you would like to try out the extension from scratch.
~> **Note**: If you decide to deploy the quick-start from scratch, be aware that this will create real infrastructure with an associated cost as per AWS' pricing.
## Usage
To use the extension, include one of the following ARNs as a layer in your
Lambda function, depending on your desired architecture.
amd64 (x86_64):
```text
arn:aws:lambda:<your-region>:634166935893:layer:vault-lambda-extension:18
```
arm64:
```text
arn:aws:lambda:<your-region>:634166935893:layer:vault-lambda-extension-arm64:6
```
Where region may be any of `af-south-1`, `ap-east-1`, `ap-northeast-1`,
`ap-northeast-2`, `ap-northeast-3`, `ap-south-1`, `ap-south-2`, `ap-southeast-1`,
`ap-southeast-2`, `ca-central-1`, `eu-central-1`, `eu-north-1`, `eu-south-1`,
`eu-west-1`, `eu-west-2`, `eu-west-3`, `me-south-1`, `sa-east-1`, `us-east-1`,
`us-east-2`, `us-west-1`, `us-west-2`.
The extension authenticates with Vault using [AWS IAM auth](/vault/docs/auth/aws),
and all configuration is supplied via environment variables. There are two methods
to read secrets, which can both be used side-by-side:
- **Recommended**: Make unauthenticated requests to the extension's local proxy
server at `http://127.0.0.1:8200`, which will add an authentication header and
proxy to the configured `VAULT_ADDR`. Responses from Vault are returned without
modification.
- Configure environment variables such as `VAULT_SECRET_PATH` for the extension
to read a secret and write it to disk.
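For example, once the extension is running, a function can fetch a secret
through the local proxy with a plain HTTP request; no Vault token is needed
because the extension adds one. The KV v2 secret path below is hypothetical:

```shell-session
$ curl --silent http://127.0.0.1:8200/v1/secret/data/lambda-app/config
```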
### Adding the extension to your existing lambda and Vault infrastructure
#### Requirements
- ARN of the role your Lambda runs as
- An instance of Vault accessible from AWS Lambda
- An authenticated `vault` client
- A secret in Vault that you want your Lambda to access, and a policy giving read access to it
- Your Lambda function must use one of the [supported runtimes](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html) for extensions
#### Step 1. configure Vault
Enable the aws auth method.
```shell-session
$ vault auth enable aws
```
Configure the AWS client to use the default options.
```shell-session
$ vault write -force auth/aws/config/client
```
Create a role prefixed with the AWS environment name.
```shell-session
$ vault write auth/aws/role/vault-lambda-role \
auth_type=iam \
bound_iam_principal_arn="${YOUR_ARN}" \
policies="${YOUR_POLICY}" \
ttl=1h
```
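For reference, a minimal sketch of a policy that could back `${YOUR_POLICY}`,
assuming the function reads a KV v2 secret under `secret/lambda-app/` (both the
policy name and the path are illustrative):

```shell-session
$ vault policy write lambda-app - <<EOF
path "secret/data/lambda-app/*" {
  capabilities = ["read"]
}
EOF
```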
#### Step 2. option a) install the extension for lambda functions packaged in zip archives
If you deploy your Lambda function as a zip file, you can add the extension
to your Lambda layers using the console or [cli](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-using):
```text
arn:aws:lambda:<your-region>:634166935893:layer:vault-lambda-extension:18
```
#### Step 2. option b) install the extension for lambda functions packaged in container images
Alternatively, if you deploy your Lambda function as a container image, place
the built binary in the `/opt/extensions` directory of your image.
Fetch the binary from
[releases.hashicorp.com](https://releases.hashicorp.com/vault-lambda-extension/).
The following command requires cURL.
```shell-session
$ curl --silent https://releases.hashicorp.com/vault-lambda-extension/0.5.0/vault-lambda-extension_0.5.0_linux_amd64.zip \
--output vault-lambda-extension.zip
```
Unzip the downloaded binary.
```shell-session
$ unzip vault-lambda-extension.zip
```
Optionally, you can verify the integrity of the downloaded zip using the release
archive checksum verification instructions
[here](https://www.hashicorp.com/security).
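As a sketch, verification might look like the following, assuming the
downloaded zip keeps its original release filename so it matches the entry in
the SHA256SUMS file, and that GNU `sha256sum` is available:

```shell-session
$ curl --silent --remote-name https://releases.hashicorp.com/vault-lambda-extension/0.5.0/vault-lambda-extension_0.5.0_SHA256SUMS
$ sha256sum --check --ignore-missing vault-lambda-extension_0.5.0_SHA256SUMS
```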
Or, to build the binary from source (this requires Go to be installed), run the following from the root of the vault-lambda-extension repository.
```shell-session
$ GOOS=linux GOARCH=amd64 go build -o vault-lambda-extension main.go
```
#### Step 3. configure vault-lambda-extension
Configure the extension using [Lambda environment
variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html):
Set the Vault API address.
```shell-session
$ VAULT_ADDR=http://vault.example.com:8200
```
Set the AWS IAM auth mount point (i.e. the path segment after `auth/` from above).
```shell-session
$ VAULT_AUTH_PROVIDER=aws
```
Set the Vault role to authenticate as. Must be configured for the ARN of your
Lambda's role.
```shell-session
$ VAULT_AUTH_ROLE=vault-lambda-role
```
Set the path to a secret in Vault, which can be static or dynamic. Unless
`VAULT_SECRET_FILE` is specified, the JSON response will be written to
`/tmp/vault/secret.json`.
```shell-session
$ VAULT_SECRET_PATH=secret/lambda-app/token
```
If everything is correctly set up, your Lambda function can then read secret
material from `/tmp/vault/secret.json`. The exact contents of the JSON object
will depend on the secret read, but its schema is the [Secret struct](https://github.com/hashicorp/vault/blob/api/v1.0.4/api/secret.go#L15)
from the Vault API module.
Alternatively, you can send normal Vault API requests over HTTP to the local
proxy at `http://127.0.0.1:8200`, and the extension will add authentication
before forwarding the request. Vault responses will be returned unmodified.
Although local communication is over plain HTTP, the proxy server will use TLS
to communicate with Vault if configured to do so as detailed below.
## Configuration
The extension is configured via [Lambda environment variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html).
Most of the [Vault CLI client's environment variables](/vault/docs/commands#environment-variables) are available,
as well as some additional variables to configure auth, which secret(s) to read and
where to write secrets.
| Environment variable | Description | Required | Example value |
|-----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------|
| `VLE_VAULT_ADDR` | Vault address to connect to. Takes precedence over `VAULT_ADDR` so that clients of the proxy server can be configured using the standard `VAULT_ADDR` | No | `https://x.x.x.x:8200` |
| `VAULT_ADDR` | Vault address to connect to if `VLE_VAULT_ADDR` is not set. Required if `VLE_VAULT_ADDR` is not set | No | `https://x.x.x.x:8200` |
| `VAULT_AUTH_PROVIDER` | Name of the configured AWS IAM auth route on Vault | Yes | `aws` |
| `VAULT_AUTH_ROLE` | Vault role to authenticate as | Yes | `lambda-app` |
| `VAULT_IAM_SERVER_ID` | Value to pass to the Vault server via the [`X-Vault-AWS-IAM-Server-ID` HTTP Header for AWS Authentication](/vault/api-docs/auth/aws#iam_server_id_header_value) | No | `vault.example.com` |
| `VAULT_SECRET_PATH` | Secret path to read, written to `/tmp/vault/secret.json` unless `VAULT_SECRET_FILE` is specified | No | `database/creds/lambda-app` |
| `VAULT_SECRET_FILE` | Path to write the JSON response for `VAULT_SECRET_PATH` | No | `/tmp/db.json` |
| `VAULT_SECRET_PATH_FOO` | Additional secret path to read, where FOO can be any name, as long as a matching `VAULT_SECRET_FILE_FOO` is specified | No | `secret/lambda-app/token` |
| `VAULT_SECRET_FILE_FOO` | Must exist for any correspondingly named `VAULT_SECRET_PATH_FOO`. Name has no further effect beyond matching to the correct path variable | No | `/tmp/token` |
| `VAULT_RUN_MODE` | Available options are `default`, `proxy`, and `file`. Proxy mode makes requests to the extension's local proxy server. File mode configures the extension to read and write secrets to disk. Default mode uses both file and proxy mode. The default is `default`. | No | `default` |
| `VAULT_TOKEN_EXPIRY_GRACE_PERIOD` | Period at the end of the proxy server's auth token TTL where it will consider the token expired and attempt to re-authenticate to Vault. Must have a unit and be parseable by `time.Duration`. Defaults to 10s. | No | `1m` |
| `VAULT_STS_ENDPOINT_REGION` | The region of the STS regional endpoint to authenticate with. If the AWS IAM auth mount specified uses a regional STS endpoint, then this needs to match the region of that endpoint. Defaults to using the global endpoint, or the region the Lambda resides in if `AWS_STS_REGIONAL_ENDPOINTS` is set to `regional` | No | `eu-west-1` |
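For example, a second secret could be written to its own file with a matching
pair of variables; the `DB` suffix, secret path, and file path here are all
hypothetical:

```shell-session
$ VAULT_SECRET_PATH_DB=database/creds/lambda-app
$ VAULT_SECRET_FILE_DB=/tmp/db.json
```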
The remaining environment variables are not required, and function exactly as
described in the [Vault Commands (CLI)](/vault/docs/commands#environment-variables) documentation. However,
note that `VAULT_CLIENT_TIMEOUT` cannot extend the timeout beyond the 10s
initialization timeout imposed by the Extensions API when writing files to disk.
| Environment variable | Description | Required | Example value |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------------- |
| `VAULT_CACERT` | Path to a PEM-encoded CA certificate _file_ on the local disk | No | `/tmp/ca.crt` |
| `VAULT_CAPATH` | Path to a _directory_ of PEM-encoded CA certificate files on the local disk | No | `/tmp/certs` |
| `VAULT_CLIENT_CERT` | Path to a PEM-encoded client certificate on the local disk | No | `/tmp/client.crt` |
| `VAULT_CLIENT_KEY` | Path to an unencrypted, PEM-encoded private key on disk which corresponds to the matching client certificate | No | `/tmp/client.key` |
| `VAULT_CLIENT_TIMEOUT` | Timeout for Vault requests. Default value is 60s. Ignored by proxy server. **Any value over 10s will exceed the Extensions API timeout and therefore have no effect** | No | `5s` |
| `VAULT_MAX_RETRIES` | Maximum number of retries on `5xx` error codes. Defaults to 2. Ignored by proxy server | No | `2` |
| `VAULT_SKIP_VERIFY` | Do not verify Vault's presented certificate before communicating with it. Setting this variable is not recommended and voids Vault's [security model](/vault/docs/internals/security) | No | `true` |
| `VAULT_TLS_SERVER_NAME` | Name to use as the SNI host when connecting via TLS | No | `vault.example.com` |
| `VAULT_RATE_LIMIT` | Only applies to a single invocation of the extension. See [Vault Commands (CLI)](/vault/docs/commands#environment-variables) documentation for details. Ignored by proxy server | No | `10` |
| `VAULT_NAMESPACE` | The namespace to use for pre-configured secrets. Ignored by proxy server | No | `education` |
| `VAULT_DEFAULT_CACHE_TTL` | The time to live configuration (aka, TTL) of the cache used by proxy server. Must have a unit and be parsable as a time.Duration. Required for caching to be enabled. | No | `15m` |
| `VAULT_DEFAULT_CACHE_ENABLED` | Enable caching for all requests, without needing to set the X-Vault-Cache-Control header for each request. Must be set to a boolean value. | No | `true` |
| `VAULT_ASSUMED_ROLE_ARN` | Valid ARN of an IAM role that can be assumed by the execution role assigned to your Lambda function. | No | `arn:aws:iam::123456789012:role/xaccounts3access`
| `VAULT_LOG_LEVEL` | Log verbosity level, one of TRACE, DEBUG, INFO, WARN, ERROR, OFF. Defaults to INFO. | No | `DEBUG`
### AWS STS client configuration
In addition to Vault configuration, you can configure certain aspects of the STS
client the extension uses through the usual AWS environment variables. For example,
if your Vault instance's IAM auth is configured to use regional STS endpoints:
```shell-session
$ vault write auth/aws/config/client \
sts_endpoint="https://sts.eu-west-1.amazonaws.com" \
sts_region="eu-west-1"
```
Then you may need to configure the extension's STS client to also use the regional
STS endpoint by setting `AWS_STS_REGIONAL_ENDPOINTS=regional`, because both the AWS Golang
SDK and Vault IAM auth method default to using the global endpoint in many regions.
See documentation on [`sts_regional_endpoints`](https://docs.aws.amazon.com/credref/latest/refdocs/setting-global-sts_regional_endpoints.html) for more information.
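As a sketch, the variable can be set alongside the extension's other
environment variables, for example via the AWS CLI (the function name is a
placeholder):

```shell-session
$ aws lambda update-function-configuration \
    --function-name my-function \
    --environment "Variables={AWS_STS_REGIONAL_ENDPOINTS=regional}"
```

Note that this call replaces the function's entire set of environment
variables, so include any others the extension relies on.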
### Caching
Caching can be configured for the extension's local proxy server so that it does
not forward every HTTP request to Vault. The main consideration behind caching
design is to make caching an explicit opt-in at the request level, so that it is
only enabled for scenarios where caching makes sense without negative impact in
others. To turn on caching, set the environment variable
`VAULT_DEFAULT_CACHE_TTL` to a valid value that is parsable as a time.Duration
in Go, for example, "15m", "1h", "2m3s" or "1h2m3s", depending on application
needs. An invalid or negative value will be treated the same as a missing value,
in which case, caching will not be set up and enabled.
Then requests with the HTTP method "GET" and the HTTP header
`X-Vault-Cache-Control: cache` will be returned directly from the cache if
there's a cache hit. On a cache miss the request will be forwarded to Vault and
the response returned and cached. If the header is set to
`X-Vault-Cache-Control: recache`, the cache lookup will be skipped, and the
request will be forwarded to Vault and the response returned and cached.
Currently, the cache key is a hash of the request URL path, headers, body, and
token.
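For example, hypothetical requests to the local proxy that opt into the cache
and then force a refresh (the secret path is illustrative):

```shell-session
$ curl --header "X-Vault-Cache-Control: cache" \
    http://127.0.0.1:8200/v1/secret/data/lambda-app/config
$ curl --header "X-Vault-Cache-Control: recache" \
    http://127.0.0.1:8200/v1/secret/data/lambda-app/config
```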
<Warning title="Nonstandard distributed tracing headers may negate the cache">
The Vault Lambda Extension cache key includes headers from proxy requests, but
excludes the standard distributed tracing headers `traceparent` and
`tracestate` because trace IDs are unique per request and would lead to unique
hashes for repeated requests.
Some distributed tracing tools may add nonstandard tracing headers, which can
also lead to individualized hashes that make repeated requests unique and
cause cache misses.
</Warning>
Caching may also be enabled for all requests by setting the environment variable
`VAULT_DEFAULT_CACHE_ENABLED` to `true`. Then all requests will be fetched and/or
cached as though the header `X-Vault-Cache-Control: cache` was present. Setting
the header to `nocache` on a request will opt-out of caching entirely in this
configuration. Setting the header to `recache` will skip the cache lookup and
return and cache the response from Vault as described previously.
~> **Warning!** The Vault Lambda Extension's cache is only in-memory
and will not be persisted when the Lambda execution environment shuts down.
In other words, the cache TTL is capped to the duration of the Lambda execution environment.
## Limitations
Secrets written to disk or returned from the proxy server will not be automatically
refreshed when they expire. This is particularly important if you configure the
extension to write secrets to disk, because the extension will only write to disk
once per execution environment, rather than once per function invocation. If you
use [provisioned concurrency](https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html#configuration-concurrency-provisioned) or if your Lambda
is invoked often enough that execution contexts live beyond the lifetime of the
secret, then secrets on disk are likely to become invalid.
In line with [Lambda best practices](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html), we recommend avoiding
writing secrets to disk where possible, and exclusively consuming secrets via
the proxy server. However, the proxy server will still not perform any additional
processing with returned secrets such as automatic lease renewal. The proxy server's
own Vault auth token is the only thing that gets automatically refreshed. It will
synchronously refresh its own token before proxying requests if the token is
expired (including a grace window), and it will attempt to renew its token if the
token is nearly expired but renewable. The proxy will also immediately refresh its token
if the incoming request header `X-Vault-Token-Options: revoke` is present.
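For example, a hypothetical request that forces the proxy to re-authenticate
to Vault before the request is forwarded (the secret path is illustrative):

```shell-session
$ curl --header "X-Vault-Token-Options: revoke" \
    http://127.0.0.1:8200/v1/secret/data/lambda-app/config
```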
<Note title="Not SnapStart compatible">
The Vault Lambda extension does not currently work with
[AWS SnapStart](https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html).
</Note>
## Performance impact
AWS Lambda pricing is based on [number of invocations, time of execution and memory
used](https://aws.amazon.com/lambda/pricing/). The following table details some approximate performance
related statistics to help assess the cost impact of this extension. Note that AWS
Lambda allocates [CPU power in proportion to memory](https://docs.aws.amazon.com/lambda/latest/dg/configuration-memory.html) so results
will vary widely. These benchmarks were run with the minimum 128MB of memory allocated,
so they aim to give an approximate baseline.
| Metric | Value | Description | Derivation |
| -------------- | ---------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| Layer size | 8.5MB | The size of the unpacked extension binary | `ls -la` |
| Init latency | 8.5ms (standard deviation 2.4ms) + one network round trip to authenticate to Vault | Extension initialization time in a new execution environment. Authentication round trip time will be highly deployment-dependent | Instrumented in code |
| Invoke latency | <1ms | The base processing time for each function invocation, assuming no calls to the proxy server | Instrumented in code |
| Memory impact | 12MB | The marginal impact on "Max Memory Used" when running the extension | As reported by Lambda when running Hello World function with and without extension |
## Uploading to your own AWS account and region
If you would like to upload the extension as a Lambda layer in your own AWS
account and region, you can do the following:
```shell-session
$ curl --silent https://releases.hashicorp.com/vault-lambda-extension/0.5.0/vault-lambda-extension_0.5.0_linux_amd64.zip \
--output vault-lambda-extension.zip
```
Set your target AWS region.
```shell-session
$ export REGION="YOUR REGION HERE"
```
Upload the extension as a Lambda layer.
```shell-session
$ aws lambda publish-layer-version \
--layer-name vault-lambda-extension \
--zip-file "fileb://vault-lambda-extension.zip" \
--region "${REGION}"
```
## Tutorial
For step-by-step instructions, refer to the [Vault AWS Lambda Extension](/vault/tutorials/app-integration/aws-lambda) tutorial for details on how to create an AWS Lambda function and use the Vault Lambda Extension to authenticate with Vault. | vault | layout docs page title Vault Lambda Extension description The Vault Lambda Extension allows a Lambda function to read secrets from a Vault deployment Vault lambda extension AWS Lambda lets you run code without provisioning and managing servers The Vault Lambda Extension https github com hashicorp vault lambda extension utilizes the AWS Lambda Extensions API to help your Lambda function read secrets from your Vault deployment You can use the quick start https github com hashicorp vault lambda extension tree main quick start directory which has an end to end example if you would like to try out the extension from scratch Note If you decide to create one from scratch be aware that this will create real infrastructure with an associated cost as per AWS pricing Usage To use the extension include one of the following ARNs as a layer in your Lambda function depending on your desired architecture amd64 x86 64 text arn aws lambda your region 634166935893 layer vault lambda extension 18 arm64 text arn aws lambda your region 634166935893 layer vault lambda extension arm64 6 Where region may be any of af south 1 ap east 1 ap northeast 1 ap northeast 2 ap northeast 3 ap south 1 ap south 2 ap southeast 1 ap southeast 2 ca central 1 eu central 1 eu north 1 eu south 1 eu west 1 eu west 2 eu west 3 me south 1 sa east 1 us east 1 us east 2 us west 1 us west 2 The extension authenticates with Vault using AWS IAM auth vault docs auth aws and all configuration is supplied via environment variables There are two methods to read secrets which can both be used side by side Recommended Make unauthenticated requests to the extension s local proxy server at http 127 0 0 1 8200 which will add an authentication header and proxy to the configured VAULT ADDR Responses from Vault are returned without modification Configure environment variables such as VAULT SECRET PATH for the extension to read a secret and write it to disk Adding the extension to your existing lambda and Vault infrastructure Requirements ARN of the role your Lambda runs as An instance of Vault accessible from AWS Lambda An authenticated vault client A secret in Vault that you want your Lambda to access and a policy giving read access to it Your Lambda function must use one of the supported runtimes https docs aws amazon com lambda latest dg runtimes extensions api html for extensions Step 1 configure Vault Enable the aws auth method shell session vault auth enable aws Configure the AWS client to use the default options shell session vault write force auth aws config client Create a role prefixed with the AWS environment name shell session vault write auth aws role vault lambda role auth type iam bound iam principal arn YOUR ARN policies YOUR POLICY ttl 1h Step 2 option a install the extension for lambda functions packaged in zip archives If you deploy your Lambda function as a zip file you can add the extension to your Lambda layers using the console or cli https docs aws amazon com lambda latest dg configuration layers html configuration layers using text arn aws lambda your region 634166935893 layer vault lambda extension 11 Step 2 option b install the extension for lambda functions packaged in container images Alternatively if you deploy your Lambda function as a container image simply place the built 
binary in the opt extensions directory of your image Fetch the binary from releases hashicorp com https releases hashicorp com vault lambda extension The following command requires cURL shell session curl silent https releases hashicorp com vault lambda extension 0 5 0 vault lambda extension 0 5 0 linux amd64 zip output vault lambda extension zip Unzip the downloaded binary shell session unzip vault lambda extension zip Optionally you can verify the integrity of the downloaded zip using the release archive checksum verification instructions here https www hashicorp com security Or to build the binary from source This requires Golang installed Run from the root of this repository shell session GOOS linux GOARCH amd64 go build o vault lambda extension main go Step 3 configure vault lambda extension Configure the extension using Lambda environment variables https docs aws amazon com lambda latest dg configuration envvars html Set the Vault API address shell session VAULT ADDR http vault example com 8200 Set the AWS IAM auth mount point i e the path segment after auth from above shell session VAULT AUTH PROVIDER aws Set the Vault role to authenticate as Must be configured for the ARN of your Lambda s role shell session VAULT AUTH ROLE vault lambda role The path to a secret in Vault Can be static or dynamic Unless VAULT SECRET FILE is specified JSON response will be written to tmp vault secret json shell session VAULT SECRET PATH secret lambda app token If everything is correctly set up your Lambda function can then read secret material from tmp vault secret json The exact contents of the JSON object will depend on the secret read but its schema is the Secret struct https github com hashicorp vault blob api v1 0 4 api secret go L15 from the Vault API module Alternatively you can send normal Vault API requests over HTTP to the local proxy at http 127 0 0 1 8200 and the extension will add authentication before forwarding the request Vault responses will be returned unmodified Although local communication is over plain HTTP the proxy server will use TLS to communicate with Vault if configured to do so as detailed below Configuration The extension is configured via Lambda environment variables https docs aws amazon com lambda latest dg configuration envvars html Most of the Vault CLI client s environment variables vault docs commands environment variables are available as well as some additional variables to configure auth which secret s to read and where to write secrets Environment variable Description Required Example value VLE VAULT ADDR Vault address to connect to Takes precedence over VAULT ADDR so that clients of the proxy server can be configured using the standard VAULT ADDR No https x x x x 8200 VAULT ADDR Vault address to connect to if VLE VAULT ADDR is not set Required if VLE VAULT ADDR is not set No https x x x x 8200 VAULT AUTH PROVIDER Name of the configured AWS IAM auth route on Vault Yes aws VAULT AUTH ROLE Vault role to authenticate as Yes lambda app VAULT IAM SERVER ID Value to pass to the Vault server via the X Vault AWS IAM Server ID HTTP Header for AWS Authentication vault api docs auth aws iam server id header value No vault example com VAULT SECRET PATH Secret path to read written to tmp vault secret json unless VAULT SECRET FILE is specified No database creds lambda app VAULT SECRET FILE Path to write the JSON response for VAULT SECRET PATH No tmp db json VAULT SECRET PATH FOO Additional secret path to read where FOO can be any name as long as a matching VAULT SECRET FILE 
FOO is specified No secret lambda app token VAULT SECRET FILE FOO Must exist for any correspondingly named VAULT SECRET PATH FOO Name has no further effect beyond matching to the correct path variable No tmp token VAULT RUN MODE Available options are default proxy and file Proxy mode makes requests to the extension s local proxy server File mode configures the extension to read and write secrets to disk Default mode uses both file and proxy mode The default is default No default VAULT TOKEN EXPIRY GRACE PERIOD Period at the end of the proxy server s auth token TTL where it will consider the token expired and attempt to re authenticate to Vault Must have a unit and be parseable by time Duration Defaults to 10s No 1m VAULT STS ENDPOINT REGION The region of the STS regional endpoint to authenticate with If the AWS IAM auth mount specified uses a regional STS endpoint then this needs to match the region of that endpoint Defaults to using the global endpoint or the region the Lambda resides in if AWS STS REGIONAL ENDPOINTS is set to regional No eu west 1 The remaining environment variables are not required and function exactly as described in the Vault Commands CLI vault docs commands environment variables documentation However note that VAULT CLIENT TIMEOUT cannot extend the timeout beyond the 10s initialization timeout imposed by the Extensions API when writing files to disk Environment variable Description Required Example value VAULT CACERT Path to a PEM encoded CA certificate file on the local disk No tmp ca crt VAULT CAPATH Path to a directory of PEM encoded CA certificate files on the local disk No tmp certs VAULT CLIENT CERT Path to a PEM encoded client certificate on the local disk No tmp client crt VAULT CLIENT KEY Path to an unencrypted PEM encoded private key on disk which corresponds to the matching client certificate No tmp client key VAULT CLIENT TIMEOUT Timeout for Vault requests Default value is 60s Ignored by proxy server Any value over 10s will exceed the Extensions API timeout and therefore have no effect No 5s VAULT MAX RETRIES Maximum number of retries on 5xx error codes Defaults to 2 Ignored by proxy server No 2 VAULT SKIP VERIFY Do not verify Vault s presented certificate before communicating with it Setting this variable is not recommended and voids Vault s security model vault docs internals security No true VAULT TLS SERVER NAME Name to use as the SNI host when connecting via TLS No vault example com VAULT RATE LIMIT Only applies to a single invocation of the extension See Vault Commands CLI vault docs commands environment variables documentation for details Ignored by proxy server No 10 VAULT NAMESPACE The namespace to use for pre configured secrets Ignored by proxy server No education VAULT DEFAULT CACHE TTL The time to live configuration aka TTL of the cache used by proxy server Must have a unit and be parsable as a time Duration Required for caching to be enabled No 15m VAULT DEFAULT CACHE ENABLED Enable caching for all requests without needing to set the X Vault Cache Control header for each request Must be set to a boolean value No true VAULT ASSUMED ROLE ARN Valid ARN of an IAM role that can be assumed by the execution role assigned to your Lambda function No arn aws iam 123456789012 role xaccounts3access VAULT LOG LEVEL Log verbosity level one of TRACE DEBUG INFO WARN ERROR OFF Defaults to INFO No DEBUG AWS STS client configuration In addition to Vault configuration you can configure certain aspects of the STS client the extension uses through the usual AWS 
environment variables For example if your Vault instance s IAM auth is configured to use regional STS endpoints shell session vault write auth aws config client sts endpoint https sts eu west 1 amazonaws com sts region eu west 1 Then you may need to configure the extension s STS client to also use the regional STS endpoint by setting AWS STS REGIONAL ENDPOINTS regional because both the AWS Golang SDK and Vault IAM auth method default to using the global endpoint in many regions See documentation on sts regional endpoints https docs aws amazon com credref latest refdocs setting global sts regional endpoints html for more information Caching Caching can be configured for the extension s local proxy server so that it does not forward every HTTP request to Vault The main consideration behind caching design is to make caching an explicit opt in at the request level so that it is only enabled for scenarios where caching makes sense without negative impact in others To turn on caching set the environment variable VAULT DEFAULT CACHE TTL to a valid value that is parsable as a time Duration in Go for example 15m 1h 2m3s or 1h2m3s depending on application needs An invalid or negative value will be treated the same as a missing value in which case caching will not be set up and enabled Then requests with HTTP method of GET and the HTTP header X Vault Cache Control cache will be returned directly from the cache if there s a cache hit On a cache miss the request will be forwarded to Vault and the response returned and cached If the header is set to X Vault Cache Control recache the cache lookup will be skipped and the request will be forwarded to Vault and the response returned and cached Currently the cache key is a hash of the request URL path headers body and token Warning title Nonstandard distributed tracing headers may negate the cache The Vault Lambda Extension cache key includes headers from proxy requests but excludes the standard distributed tracing headers traceparent and tracestate because trace IDs are unique per request and would lead to unique hashes for repeated requests Some distributed tracing tools may add nonstandard tracing headers which can also lead to individualized hashes that make repeated requests unique and cause cache misses Warning Caching may also be enabled for all requests by setting the environment variable VAULT DEFAULT CACHE ENABLED to true Then all requests will be fetched and or cached as though the header X Vault Cache Control cache was present Setting the header to nocache on a request will opt out of caching entirely in this configuration Setting the header to recache will skip the cache lookup and return and cache the response from Vault as described previously Warning The Vault Lambda Extension s cache is only in memory and will not be persisted when the Lambda execution environment shuts down In order words the cache TTL is capped to the duration of the Lambda execution environment Limitations Secrets written to disk or returned from the proxy server will not be automatically refreshed when they expire This is particularly important if you configure the extension to write secrets to disk because the extension will only write to disk once per execution environment rather than once per function invocation If you use provisioned concurrency https docs aws amazon com lambda latest dg configuration concurrency html configuration concurrency provisioned or if your Lambda is invoked often enough that execution contexts live beyond the lifetime of the secret then 
secrets on disk are likely to become invalid In line with Lambda best practices https docs aws amazon com lambda latest dg best practices html we recommend avoiding writing secrets to disk where possible and exclusively consuming secrets via the proxy server However the proxy server will still not perform any additional processing with returned secrets such as automatic lease renewal The proxy server s own Vault auth token is the only thing that gets automatically refreshed It will synchronously refresh its own token before proxying requests if the token is expired including a grace window and it will attempt to renew its token if the token is nearly expired but renewable The proxy will also immediately refresh its token if the incoming request header X Vault Token Options revoke is present Note title Not SnapStart compatible The Vault Lambda extension does not currently work with AWS SnapStart https docs aws amazon com lambda latest dg snapstart html Note Performance impact AWS Lambda pricing is based on number of invocations time of execution and memory used https aws amazon com lambda pricing The following table details some approximate performance related statistics to help assess the cost impact of this extension Note that AWS Lambda allocates CPU power in proportion to memory https docs aws amazon com lambda latest dg configuration memory html so results will vary widely These benchmarks were run with the minimum 128MB of memory allocated so aim to give an approximate baseline Metric Value Description Derivation Layer size 8 5MB The size of the unpacked extension binary ls la Init latency 8 5ms standard deviation 2 4ms one network round trip to authenticate to Vault Extension initialization time in a new execution environment Authentication round trip time will be highly deployment dependent Instrumented in code Invoke latency 1ms The base processing time for each function invocation assuming no calls to the proxy server Instrumented in code Memory impact 12MB The marginal impact on Max Memory Used when running the extension As reported by Lambda when running Hello World function with and without extension Uploading to your own AWS account and region If you would like to upload the extension as a Lambda layer in your own AWS account and region you can do the following shell session curl silent https releases hashicorp com vault lambda extension 0 5 0 vault lambda extension 0 5 0 linux amd64 zip output vault lambda extension zip Set your target AWS region shell session export REGION YOUR REGION HERE Upload the extension as a Lambda layer shell session aws lambda publish layer version layer name vault lambda extension zip file fileb vault lambda extension zip region REGION Tutorial For step by step instructions refer to the Vault AWS Lambda Extension vault tutorials app integration aws lambda tutorial for details on how to create an AWS Lambda function and use the Vault Lambda Extension to authenticate with Vault |
---
layout: docs
page_title: Configure Vault ServiceNow Credential Resolver
description: This section documents the configurables for the Vault ServiceNow Credential Resolver.
---
# Configuring the Vault credential resolver
## MID server properties
The following [properties] are supported by the Vault Credential Resolver:
- `mid.external_credentials.vault.address` `(string: "")` - Address of Vault Agent as resolvable by the MID server.
For example, if Vault Agent is on the same server as the MID server it could be `https://127.0.0.1:8200`.
- `mid.external_credentials.vault.ca` `(string: "")` - The CA certificate to trust for TLS in PEM format. If unset,
the system's trusted CAs will be used.
- `mid.external_credentials.vault.tls_skip_verify` `(string: "")` - When set to true, skips verification of the Vault server
  TLS certificate. Setting this to true is not recommended for production.
[properties]: https://docs.servicenow.com/bundle/quebec-servicenow-platform/page/product/mid-server/reference/r_MIDServerProperties.html#t_SetMIDServerProperties
## Configuring discovery credentials
To consume Vault credentials from your MID server, you will need to:
* Create a secret in Vault
* Configure the resolver to use that secret
### Creating a secret in Vault
The credential resolver supports reading credentials from the following secret engines:
* [Active Directory](/vault/docs/secrets/ad)
* [AD/OpenLDAP](/vault/docs/secrets/ldap)
* [AWS](/vault/docs/secrets/aws)
* [KV v1](/vault/docs/secrets/kv/kv-v1)
* [KV v2](/vault/docs/secrets/kv/kv-v2)
When creating KV secrets, you must use the following keys for each component
to ensure it is correctly mapped to ServiceNow's credential fields:
Key | Description | Supported aliases
------------|----------------------------------------|------------------
username | The username | access_key
password | The password | secret_key, current_password
private_key | The private SSH key |
passphrase | The passphrase for the private SSH key |
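For example, a sketch of storing a basic credential in KV v2, assuming the
engine is mounted at `secret`; the path and values are illustrative:

```shell-session
$ vault kv put secret/windows-admin \
    username="discovery-svc" \
    password="example-password"
```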
Most ServiceNow credential types will expect at least a username and either
a password or a private key. To help surface errors early, the credential
resolver validates that a username and password are present for:
* aws
* basic
* jdbc
* jms
* ssh_password
* vmware
* windows
The credential resolver expects the following types to specify at least
a username and a private key:
* api_key
* cfg_chef_credentials
* infoblox
* sn_cfg_ansible
* sn_disco_certmgmt_certificate_ca
* ssh_private_key
For SNMPv3 credentials, the credential resolver can accept up to five values:
* username
* auth-protocol
* auth-key
* privacy-protocol
* privacy-key
Depending on the configuration of the SNMP endpoint, at least the username will always be required. See below for the different SNMP endpoint configurations:
Level | Authentication | Encryption | What Happens
--------------|----------------|------------|------------------------
noAuthNoPriv | Username | None | Username match for auth
authNoPriv | MD5 or SHA | None | Auth based on HMAC-MD5 or HMAC-SHA algorithms
authPriv | MD5 or SHA | DES | Auth based on HMAC-MD5 or HMAC-SHA algorithms; provides DES 56-bit encryption based on (CBC)-DES (DES-56)
### Configuring the resolver to use a secret
<ImageConfig hideBorder caption="Vault credential resolver">

</ImageConfig>
In the ServiceNow UI:
1. Navigate to "Discovery - Credentials → New".
1. Choose a type from the list.
1. Select "External credential store".
1. Provide a fully qualified collection name (FQCN):
- **Xanadu (Q4-2024) or newer**: use `com.snc.discovery.CredentialResolver`
- **Versions prior to Xanadu (Q4-2024)**: leave blank or use "None"
1. Provide a meaningful name for the resolver.
1. Set "Credential ID" to the
[ReadSecretVersion endpoint](/vault/api-docs/secret/kv/kv-v2#read-secret-version)
of your secrets plugin and credential. For example, the endpoint
for a secret stored on the path `ssh` under a KV v2 secret engine mounted at
`secret` is `/secret/data/ssh`.
1. Click "Test credential" then select a MID server and target to test your
configuration. | vault | layout docs page title Configure Vault ServiceNow Credential Resolver description This section documents the configurables for the Vault ServiceNow Credential Resolver Configuring the Vault credential resolver MID server properties The following properties are supported by the Vault Credential Resolver mid external credentials vault address string Address of Vault Agent as resolveable by the MID server For example if Vault Agent is on the same server as the MID server it could be https 127 0 0 1 8200 mid external credentials vault ca string The CA certificate to trust for TLS in PEM format If unset the system s trusted CAs will be used mid external credentials vault tls skip verify string When set to true skips verification of the Vault server TLS certificiate Setting this to true is not recommended for production properties https docs servicenow com bundle quebec servicenow platform page product mid server reference r MIDServerProperties html t SetMIDServerProperties Configuring discovery credentials To consume Vault credentials from your MID server you will need to Create a secret in Vault Configure the resolver to use that secret Creating a secret in Vault The credential resolver supports reading credentials from the following secret engines Active Directory vault docs secrets ad AD OpenLDAP vault docs secrets ldap AWS vault docs secrets aws KV v1 vault docs secrets kv kv v1 KV v2 vault docs secrets kv kv v2 When creating KV secrets you must use the following keys for each component to ensure it is correctly mapped to ServiceNow s credential fields Key Description Supported aliases username The username access key password The password secret key current password private key The private SSH key passphrase The passphrase for the private SSH key Most ServiceNow credential types will expect at least a username and either a password or a private key To help surface errors early the credential resolver validates that a username and password are present for aws basic jdbc jms ssh password vmware windows The credential resolver expects the following types to specify at least a username and a private key api key cfg chef credentials infoblox sn cfg ansible sn disco certmgmt certificate ca ssh private key For SNMPv3 credentials the credential resolver can accept up to five values username auth protocol auth key privacy protocol privacy key Depending on the configuration of the SNMP endpoint the username at least will always be required See below for different SNMP endpoint configurations Level Authentication Encryption What Happens noAuthNoPriv Username None Username match for auth authNoPriv MD5 or SHA None Auth based on HMAC MD5 or HMAC SHA algorithms authPriv MD5 or SHA DES Auth based on HMAC MD5 or HMAC SHA algorithms provides DES 56 bit encryption based on CBC DES DES 56 Configuring the resolver to use a secret ImageConfig hideBorder caption Vault credential resolver Partial screenshot of the ServiceNow UI showing the search dialog for adding a Vault configuration by name img service now vault credential resolver fqcn png ImageConfig In the ServiceNow UI 1 Navigate to Discovery Credentials rarr New 1 Choose a type from the list 1 Select External credential store 1 Provide a fully qualified collection name FQCN Xanadu Q4 2024 or newer use com snc discovery CredentialResolver Versions prior to Xanadu Q4 2024 leave blank or use None 1 Provide a meaningful name for the resolver 1 Set Credential ID to the ReadSecretVersion endpoint vault api docs secret kv kv v2 read 
secret version of your secrets plugin and credential For example the endpoint for a secret stored on the path ssh under a KV v2 secret engine mounted at secret is secret data ssh 1 Click Test credential then select a MID server and target to test your configuration |
---
layout: docs
page_title: Install Vault ServiceNow Credential Resolver
description: Installation steps for the Vault ServiceNow Credential Resolver.
---
# Installing the Vault credential resolver
## Prerequisites
* ServiceNow version Quebec+ (untested on previous versions)
* MID server version Quebec+ (untested on previous versions)
* Discovery and external credential plugins activated on ServiceNow
* Working Vault deployment accessible from the MID server
## Installing Vault agent
* Select your desired auth method from Agent's [supported auth methods](/vault/docs/agent-and-proxy/autoauth/methods)
and set it up in Vault
* For example, to set up AppRole auth and a role called `role1` with the `demo` policy attached:
```bash
vault auth enable approle
vault policy write demo - <<EOF
path "secret/*" {
capabilities = ["read"]
}
EOF
vault write auth/approle/role/role1 bind_secret_id=true token_policies=demo
```
* To get the files required for the example Agent config below, you can then
run:
```bash
echo -n $(vault read -format json auth/approle/role/role1/role-id | jq -r '.data.role_id') > /path/to/roleID
echo -n $(vault write -format json -f auth/approle/role/role1/secret-id | jq -r '.data.secret_id') > /path/to/secretID
```
* Create an `agent.hcl` config file. Your exact configuration may vary, but you
must set `cache.use_auto_auth_token = true`, and the `listener`, `vault` and
`auto_auth` blocks are also required to set up a working Agent, e.g.:
```hcl
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = false
tls_cert_file = "/path/to/cert.pem"
tls_key_file = "/path/to/key.pem"
}
cache {
use_auto_auth_token = true
}
vault {
address = "http://vault.example.com:8200"
}
auto_auth {
method {
type = "approle"
config = {
role_id_file_path = "/path/to/roleID"
secret_id_file_path = "/path/to/secretID"
remove_secret_id_file_after_reading = false
}
}
}
```
* Install Vault Agent as a service running `vault agent -config=/path/to/agent.hcl`
* Documentation for Windows service installation [here](/vault/docs/agent-and-proxy/agent/winsvc)
## Uploading JAR file to MID server
<Warning heading="Use the ServiceNow app store to install Vault Credential Resolver">
The steps documented below are for **pre-Utah ServiceNow versions**.
As of ServiceNow version UTAH, use the "HashiCorp Vault Credential Resolver" App
from the ServiceNow App store to install the Vault Credential Resolver and verify
the jar file installed is `vault-servicenow-credential-resolver`. If you wish to
use a custom name, you must manually rename the deployed jar.
</Warning>
* Download the latest version of the Vault Credential Resolver JAR file from
[releases.hashicorp.com](https://releases.hashicorp.com/vault-servicenow-credential-resolver/)
* In ServiceNow, navigate to "MID server - JAR files" -> New
* Manage Attachments -> upload Vault Credential Resolver JAR
* Fill in name, version etc as desired
* Click Submit
* Navigate to "MID server - Properties" -> New
* Set Name: `mid.external_credentials.vault.address`, Value: Address of Vault
Agent listener from previous step, e.g. `http://127.0.0.1:8200`
* **Optional:** Set the property `mid.external_credentials.vault.ca` to the
trusted CA in PEM format if using TLS between the MID server and Vault
Agent with a self-signed certificate.
## Next steps
See [configuration](/vault/docs/platform/servicenow/configuration) for details on
configuring the resolver and using credentials for discovery. | vault | layout docs page title Install Vault ServiceNow Credential Resolver description Installation steps for the Vault ServiceNow Credential Resolver Installing the Vault credential resolver Prerequisites ServiceNow version Quebec untested on previous versions MID server version Quebec untested on previous versions Discovery and external credential plugins activated on ServiceNow Working Vault deployment accessible from the MID server Installing Vault agent Select your desired auth method from Agent s supported auth methods vault docs agent and proxy autoauth methods and set it up in Vault For example to set up AppRole auth and a role called role1 with the demo policy attached bash vault auth enable approle vault policy write demo EOF path secret capabilities read EOF vault write auth approle role role1 bind secret id true token policies demo To get the files required for the example Agent config below you can then run bash echo n vault read format json auth approle role role1 role id jq r data role id path to roleID echo n vault write format json f auth approle role role1 secret id jq r data secret id path to secretID Create an agent hcl config file Your exact configuration may vary but you must set cache use auto auth token true and the listener vault and auto auth blocks are also required to set up a working Agent e g hcl listener tcp address 127 0 0 1 8200 tls disable false tls cert file path to cert pem tls key file path to key pem cache use auto auth token true vault address http vault example com 8200 auto auth method type approle config role id file path path to roleID secret id file path path to secretID remove secret id file after reading false Install Vault Agent as a service running vault agent config path to agent hcl Documentation for Windows service installation here vault docs agent and proxy agent winsvc Uploading JAR file to MID server Warning heading Use the ServiceNow app store to install Vault Credential Resolver The steps documented below are for pre ServiceNow UTAH versions As of ServiceNow version UTAH use the HashiCorp Vault Credential Resolver App from the ServiceNow App store to install the Vault Credential Resolver and verify the jar file installed is vault servicenow credential resolver If you wish to use a custom name you must manually rename the deployed jar Warning Download the latest version of the Vault Credential Resolver JAR file from releases hashicorp com https releases hashicorp com vault servicenow credential resolver In ServiceNow navigate to MID server JAR files New Manage Attachments upload Vault Credential Resolver JAR Fill in name version etc as desired Click Submit Navigate to MID server Properties New Set Name mid external credentials vault address Value Address of Vault Agent listener from previous step e g http 127 0 0 1 8200 Optional Set the property mid external credentials vault ca to the trusted CA in PEM format if using TLS between the MID server and Vault Agent with a self signed certificate Next steps See configuration vault docs platform servicenow configuration for details on configuring the resolver and using credentials for discovery |
---
layout: docs
page_title: Install the Vault EKM Provider
description: Installation steps for the Vault EKM Provider for Microsoft SQL Server.
---
# Installing the Vault EKM provider
This guide assumes you are installing the Vault EKM Provider for the first time.
For upgrade instructions, see [upgrading](/vault/docs/platform/mssql/upgrading).
## Prerequisites
* Vault Enterprise server 1.9+ with a license for the Advanced Data Protection Key Management module
* Microsoft Windows Server operating system
* Microsoft SQL Server 2012 or newer for Windows (Windows SQL Server Express and SQL Server for Linux [do not support EKM][linux-ekm])
* An authenticated Vault client
To check your Vault version and license, you can run:
```bash
vault status
vault license get -format=json
```
The list of features should include "Key Management Transparent Data Encryption".
[linux-ekm]: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-editions-and-components-2019?view=sql-server-ver15#Unsupported
## Configuring Vault
The EKM provider requires AppRole auth and the Transit secrets engine to be set up
on the Vault server. The steps below configure Vault so that the EKM provider can
use it.
-> **Note:** rsa-2048 is currently the only supported key type.
1. Set up AppRole auth:
```bash
vault auth enable approle
vault write auth/approle/role/ekm-encryption-key-role \
token_ttl=20m \
max_token_ttl=30m \
token_policies=tde-policy
```
-> **Note:** After authenticating to Vault with the AppRole, the EKM provider
will re-use the token it receives until it expires, at which point it will
authenticate using the AppRole credentials again; it will not attempt to renew
its token. The example AppRole configuration here will work for this, but keep
that in mind if you choose to use a different AppRole configuration.
1. Retrieve the AppRole ID and secret ID for use later when configuring SQL Server:
```bash
vault read auth/approle/role/ekm-encryption-key-role/role-id
vault write -f auth/approle/role/ekm-encryption-key-role/secret-id
```
1. Enable the transit secret engine and create a key:
```bash
vault secrets enable transit
vault write -f transit/keys/ekm-encryption-key type="rsa-2048"
```
1. Create a policy for the Vault EKM provider to use. The following policy has
the minimum required permissions:
```bash
vault policy write tde-policy -<<EOF
path "transit/keys/ekm-encryption-key" {
capabilities = ["create", "read", "update", "delete"]
}
path "transit/keys" {
capabilities = ["list"]
}
path "transit/encrypt/ekm-encryption-key" {
capabilities = ["update"]
}
path "transit/decrypt/ekm-encryption-key" {
capabilities = ["update"]
}
path "sys/license/status" {
capabilities = ["read"]
}
EOF
```
## Configuring SQL server
The remaining steps are all run on the database server.
### Install the EKM provider on the server
1. Download and run the latest Vault EKM provider installer from
[releases.hashicorp.com](https://releases.hashicorp.com/vault-mssql-ekm-provider/)
1. Enter your Vault server's address when prompted and complete the installer
1. If you need to configure non-default namespace or mount paths for your AppRole and
Transit engines, see [configuration](/vault/docs/platform/mssql/configuration).
### Configure the EKM provider using SQL
Open Microsoft SQL Server Management Studio, and run the queries below to complete
installation.
1. Enable the EKM feature and create a cryptographic provider using the folder
you just installed the EKM provider into.
```sql
-- Enable advanced options
USE master;
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
-- Enable EKM provider
EXEC sp_configure 'EKM provider enabled', 1;
GO
RECONFIGURE;
GO
CREATE CRYPTOGRAPHIC PROVIDER TransitVaultProvider
FROM FILE = 'C:\Program Files\HashiCorp\Transit Vault EKM Provider\TransitVaultEKM.dll'
GO
```
1. Next, create credentials for an admin to use EKM with your AppRole role and
secret ID from above:
```sql
-- Replace <approle-role-id> and <approle-secret-id> with the values from
-- the earlier vault commands:
-- vault read auth/approle/role/ekm-encryption-key-role/role-id
-- vault write -f auth/approle/role/ekm-encryption-key-role/secret-id
CREATE CREDENTIAL TransitVaultCredentials
WITH IDENTITY = '<approle-role-id>',
SECRET = '<approle-secret-id>'
FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider;
GO
-- Replace <domain>\<login> with the SQL Server administrator's login
ALTER LOGIN "<domain>\<login>" ADD CREDENTIAL TransitVaultCredentials;
```
1. You can now create an asymmetric key using the transit key set up earlier:
```sql
CREATE ASYMMETRIC KEY TransitVaultAsymmetric
FROM PROVIDER TransitVaultProvider
WITH
CREATION_DISPOSITION = OPEN_EXISTING,
PROVIDER_KEY_NAME = 'ekm-encryption-key';
```
-> **Note:** This is the first step at which the EKM provider will communicate with Vault. If
Vault is misconfigured, this step is likely to fail. See
[troubleshooting](/vault/docs/platform/mssql/troubleshooting) for tips on specific error codes.
1. Create another login from the new asymmetric key:
```sql
-- Replace <approle-role-id> and <approle-secret-id> with the values from
-- the earlier vault commands again
CREATE CREDENTIAL TransitVaultTDECredentials
WITH IDENTITY = '<approle-role-id>',
SECRET = '<approle-secret-id>'
FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider;
GO
CREATE LOGIN TransitVaultTDELogin
FROM ASYMMETRIC KEY TransitVaultAsymmetric;
GO
ALTER LOGIN TransitVaultTDELogin
ADD CREDENTIAL TransitVaultTDECredentials;
GO
```
1. Finally, you can enable TDE and protect the database encryption key with
the asymmetric key managed by Vault's Transit secret engine:
```sql
CREATE DATABASE TestTDE
GO
USE TestTDE;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER ASYMMETRIC KEY TransitVaultAsymmetric;
GO
ALTER DATABASE TestTDE
SET ENCRYPTION ON;
GO
```
1. Check the status of database encryption using the following queries (an
   `encryption_state` of `3` indicates the database is encrypted):
```sql
SELECT * FROM sys.dm_database_encryption_keys;
SELECT (SELECT name FROM sys.databases WHERE database_id = k.database_id) as name,
encryption_state, key_algorithm, key_length,
encryptor_type, encryption_state_desc, encryption_scan_state_desc FROM sys.dm_database_encryption_keys k;
```
## Key rotation
See [key rotation](/vault/docs/platform/mssql/rotation) for guidance on rotating
the encryption keys. | vault | layout docs page title Install the Vault EKM Provider description Installation steps for the Vault EKM Provider for Microsoft SQL Server Installing the Vault EKM provider This guide assumes you are installing the Vault EKM Provider for the first time For upgrade instructions see upgrading vault docs platform mssql upgrading Prerequisites Vault Enterprise server 1 9 with a license for the Advanced Data Protection Key Management module Microsoft Windows Server operating system Microsoft SQL Server 2012 or newer for Windows Windows SQL Server Express and SQL Server for Linux does not support EKM linux ekm An authenticated Vault client To check your Vault version and license you can run bash vault status vault license get format json The list of features should include Key Management Transparent Data Encryption linux ekm https docs microsoft com en us sql linux sql server linux editions and components 2019 view sql server ver15 Unsupported Installing the Vault EKM provider Configuring Vault The EKM provider requires AppRole auth and the Transit secret engine to be setup on the Vault server The steps below can be used to configure Vault ready for the EKM provider to use it Note rsa 2048 is currently the only supported key type 1 Set up AppRole auth bash vault auth enable approle vault write auth approle role ekm encryption key role token ttl 20m max token ttl 30m token policies tde policy Note After authenticating to Vault with the AppRole the EKM provider will re use the token it receives until it expires at which point it will authenticate using the AppRole credentials again it will not attempt to renew its token The example AppRole configuraiton here will work for this but keep that in mind if you choose to use a different AppRole configuration 1 Retrieve the AppRole ID and secret ID for use later when configuring SQL Server bash vault read auth approle role ekm encryption key role role id vault write f auth approle role ekm encryption key role secret id 1 Enable the transit secret engine and create a key bash vault secrets enable transit vault write f transit keys ekm encryption key type rsa 2048 1 Create a policy for the Vault EKM provider to use The following policy has the minimum required permissions bash vault policy write tde policy EOF path transit keys ekm encryption key capabilities create read update delete path transit keys capabilities list path transit encrypt ekm encryption key capabilities update path transit decrypt ekm encryption key capabilities update path sys license status capabilities read EOF Configuring SQL server The remaining steps are all run on the database server Install the EKM provider on the server 1 Download and run the latest Vault EKM provider installer from releases hashicorp com https releases hashicorp com vault mssql ekm provider 1 Enter your Vault server s address when prompted and complete the installer 1 If you need to configure non default namespace or mount paths for your AppRole and Transit engines see configuration vault docs platform mssql configuration Configure the EKM provider using SQL Open Microsoft SQL Server Management Studio and run the queries below to complete installation 1 Enable the EKM feature and create a cryptographic provider using the folder you just installed the EKM provider into sql Enable advanced options USE master GO EXEC sp configure show advanced options 1 GO RECONFIGURE GO Enable EKM provider EXEC sp configure EKM provider enabled 1 GO RECONFIGURE GO CREATE CRYPTOGRAPHIC PROVIDER 
TransitVaultProvider FROM FILE C Program Files HashiCorp Transit Vault EKM Provider TransitVaultEKM dll GO 1 Next create credentials for an admin to use EKM with your AppRole role and secret ID from above sql Replace approle role id and approle secret id with the values from the earlier vault commands vault read auth approle role ekm encryption key role id vault write f auth approle role ekm encryption key secret id CREATE CREDENTIAL TransitVaultCredentials WITH IDENTITY approle role id SECRET approle secret id FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider GO Replace domain login with the SQL Server administrator s login ALTER LOGIN domain login ADD CREDENTIAL TransitVaultCredentials 1 You can now create an asymmetric key using the transit key set up earlier sql CREATE ASYMMETRIC KEY TransitVaultAsymmetric FROM PROVIDER TransitVaultProvider WITH CREATION DISPOSITION OPEN EXISTING PROVIDER KEY NAME ekm encryption key Note This is the first step at which the EKM provider will communicate with Vault If Vault is misconfigured this step is likely to fail See troubleshooting vault docs platform mssql troubleshooting for tips on specific error codes 1 Create another login from the new asymmetric key sql Replace approle role id and approle secret id with the values from the earlier vault commands again CREATE CREDENTIAL TransitVaultTDECredentials WITH IDENTITY approle role id SECRET approle secret id FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider GO CREATE LOGIN TransitVaultTDELogin FROM ASYMMETRIC KEY TransitVaultAsymmetric GO ALTER LOGIN TransitVaultTDELogin ADD CREDENTIAL TransitVaultTDECredentials GO 1 Finally you can enable TDE and protect the database encryption key with the asymmetric key managed by Vault s Transit secret engine sql CREATE DATABASE TestTDE GO USE TestTDE GO CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM AES 256 ENCRYPTION BY SERVER ASYMMETRIC KEY TransitVaultAsymmetric GO ALTER DATABASE TestTDE SET ENCRYPTION ON GO 1 Check the status of database encryption using the following queries sql SELECT FROM sys dm database encryption keys SELECT SELECT name FROM sys databases WHERE database id k database id as name encryption state key algorithm key length encryptor type encryption state desc encryption scan state desc FROM sys dm database encryption keys k Key rotation See key rotation vault docs platform mssql rotation for guidance on rotating the encryption keys |
---
layout: docs
page_title: 1.10.0
description: |-
This page contains release notes for Vault 1.10.0
---
# Vault 1.10.0 release notes
**Software Release date:** Mar 23, 2022
**Summary:** Vault version 1.10.0 offers features and enhancements that improve the user experience while closing the loop on key issues previously encountered by our customers. We are providing a summary of these improvements in these release notes.
We encourage you to upgrade to the latest release to take advantage of the new benefits that we are providing. Additionally, with this latest release, we offer solutions to critical feature gaps that have been identified previously. For further information on product improvements, including a comprehensive list of bug fixes, please refer to the [Changelog](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md) within the Vault 1.10.0 release.
Some of these enhancements and changes in this release include:
- Ability to view client counts per auth method and changes to clients over months, providing more granular visibility into clients.
- Extended the `sys/remount` API endpoint to support moving secrets engines and auth method mounts from one location to another, within a namespace or across namespaces.
- Improved security posture that includes MFA on login for Vault Community Edition customers.
- Ability to implicitly achieve consistency via tokens.
- Support for PKCE on Vault’s OIDC auth method, and telemetry support for the Vault Agent.
- Improvement of key areas and parity to support using Terraform Provider with Vault.
## New features
This section describes the new features introduced as part of Vault 1.10.0.
### Multi-Factor authentication (MFA) for Vault Community Edition
Vault has had support for the [Step-up Enterprise MFA](/vault/docs/enterprise/mfa) as part of its Enterprise edition. The Step-up Enterprise MFA allows having an MFA on login, or for step-up access to sensitive resources in Vault.
With Vault 1.10.0, MFA as part of [login](/vault/docs/auth/login-mfa) is now supported for Vault Community Edition. This demonstrates HashiCorp’s thought leadership in security and its continued endeavor to enable all Vault users to employ strong security policies with Vault.
~> **Note:** The Legacy MFA in Vault Community Edition is a [deprecated](/vault/docs/deprecation) feature and will be removed in Vault 1.11.
Refer to the [Login MFA FAQ](/vault/docs/auth/login-mfa/faq) to understand the various MFA workflows that are supported in Vault 1.10.0.
### Vault OIDC provider with PKCE support
Vault’s support to act as an OIDC provider is now generally available. Furthermore, Vault’s OIDC provider functionality can now support PKCE for authorization code flow as well. Thanks to all the excellent community feedback received, we have simplified the user experience around configuration of OIDC provider functionality.
### Caching support for Vault lambda extension
With 0.6.0, the Vault Lambda Extension supports [caching](https://github.com/hashicorp/vault-lambda-extension#caching) in the local proxy server to avoid proxying every request, with options to set the cache expiry time and invalidate the cache as needed.
### Terraform provider for Vault
We have introduced three new resources to enable configuration of the [KMIP secrets engine](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/kmip_secret_backend) using the Terraform Provider for Vault. In addition, frequent releases on the Terraform Provider for Vault have been incorporating the ability to configure newer resources and data sources. Please read the [documentation](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) for more details.
### KV secrets engine v2 patch operations
We now support an additional method for managing [KV v2 secrets](/vault/api-docs/secret/kv/kv-v2) to maintain least privilege security in certain types of automated environments. This feature creates a new PATCH capability that enables partial updates to KV v2 secrets without requiring the READ privilege to the entire endpoint for an entity.
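For illustration, a minimal CLI sketch (the mount, secret path, and field name are placeholders); the caller only needs the new `patch` capability on the path rather than `read` plus `update`:

```bash
# Update a single field of a KV v2 secret without reading or
# resubmitting the rest of the secret's data.
vault kv patch secret/my-app/config ttl=12h
```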
### DB2 dynamic secrets support
Vault operators can leverage the openldap secrets engine to manage credentials for IBM Db2 and the LDAP security plugin for Db2. This allows Db2 to offload authentication and authorization to the LDAP security plugin and allows Vault to manage static credentials or even generate dynamic users. For more details, refer to the [IBM Db2 Credentials Management](/vault/tutorials/secrets-management/ibm-db2-openldap) tutorial.
### Temporal transit key rotation
Proper key management includes occasionally rotating encryption keys to reduce the risk of nonce reuse and the opportunity for keys to be compromised. Previously, there was no automated way to rotate keys that is native to Vault. Now, transit keys and tokenization transform configurations accept a new configuration element: a time interval after which Vault automatically rotates the key.
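A minimal sketch, assuming the `auto_rotate_period` parameter name documented for the transit key configuration endpoint (the key name and interval are placeholders):

```bash
# Ask Vault to rotate this transit key automatically every 30 days.
vault write transit/keys/my-key/config auto_rotate_period=720h
```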
### PKI HSM forwarding
To address security and compliance needs, customers may require that keys be either created or stored within Hardware Security Modules (HSMs). Vault 1.10.0 introduces an accommodation for this requirement with regard to the PKI secrets engine. We now support offloading selected PKI operations to HSMs, in particular allowing customers to both generate new PKI key pairs and sign/verify some certificate workflows. All of these operations are conducted in a way that never allows the private key material to leave the secure confines of the HSM itself.
### AWS and AKV KMS forwarding
The work done above to support HSM-backed PKI operations inspired us to consider what other key possession paradigms we could support. This led us to extend the implementation to support Cloud Key Management Systems in addition to HSMs. In Vault 1.10.0, users may generate new PKI key pairs and perform sign/verify certificate workflows, all with those keys never leaving the cloud KMS itself. Vault 1.10.0 provides support for AWS Key Management Service and Azure Key Vault Key Management Service.
### Server side consistent tokens
Vault’s [eventual consistency](/vault/docs/enterprise/consistency) model precludes read-after-write guarantees when clients interact with performance standbys or performance replication clusters. The [Client Controlled Consistency](/vault/docs/enterprise/consistency#vault-1-7-mitigations) mitigations supported with Vault 1.7 provide ways to achieve consistency through client modifications or by using the agent for proxied requests, which is not possible in all cases. The Server Side Consistent Tokens feature provides an implicit way to achieve consistency by embedding the minimum Write-Ahead-Log state information in the Service tokens returned from logins or token-create requests. This feature introduces changes in the token format, and the new tokens will be the default starting in Vault 1.10.0. Vault 1.10.0 is backwards compatible with old tokens.
See [Replication](/vault/docs/configuration/replication), [Vault Eventual Consistency](/vault/docs/enterprise/consistency), [Upgrade to 1.10.0](/vault/docs/upgrading/upgrade-to-1.10.x) and [Server Side Consistent Token FAQ](/vault/docs/faq/ssct) to understand the various consistency options available with Vault 1.10.0 and the considerations to be aware of prior to selecting an option for your use case.
## Vault agent features
### Support for telemetry
Starting with Vault 1.10.0, the Vault Agent supports a new metrics endpoint and [Telemetry](/vault/docs/agent#telemetry-stanza) metrics around run time, authentication successes, authentication failures, cache hits, cache misses, proxy successes, and proxy client errors. This Vault Agent telemetry should greatly help with the retrieval of key operational insights for Vault Agent deployments.
### User-assigned managed identities for auto auth in Azure
With this [enhancement](/vault/docs/agent/autoauth/methods/azure), users can specify user-assigned managed identities via the `object_id` or `client_id` parameters when configuring Vault Agent auto-auth for Azure. This enables users that have more than one user-assigned managed identity associated with their VM to specify which one they'd like to use when authenticating via Vault's Azure auth method. Note that `object_id` and `client_id` are mutually exclusive; provide exactly one of them.
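For illustration, a hedged Agent `auto_auth` snippet that selects a specific user-assigned managed identity by its object ID (the role, resource, and ID values are placeholders):

```hcl
auto_auth {
  method "azure" {
    config = {
      role     = "dev-role"
      resource = "https://management.azure.com/"
      # Provide exactly one of object_id or client_id.
      object_id = "00000000-0000-0000-0000-000000000000"
    }
  }
}
```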
### Quit API endpoint with config
Previously, for instances where the Agent is a sidecar in a Kubernetes job and the job hangs, you had to either use `shareProcessNamespace: true` for the container so that process kill signals could be sent, or avoid the sidecar container entirely and rely solely on an init container. With this [enhancement](/vault/docs/agent#quit), we have added support for a Quit API endpoint to shut down the Vault Agent on demand, eliminating the need for those workarounds.
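A minimal sketch of enabling the endpoint on an Agent listener (the address is a placeholder); once enabled, a sidecar or job wrapper can stop the Agent with `curl -X POST http://127.0.0.1:8200/agent/v1/quit`:

```hcl
listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true

  # Expose the quit endpoint on this listener only.
  agent_api {
    enable_quit = true
  }
}
```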
## Other features and enhancements
This section describes other features and enhancements introduced as part of the Vault 1.10.0 release.
### Client count improvements
We have introduced auth mount-based attribution of clients to help better understand where clients are being used within a cluster. This is available via UI and API. This is an enhancement on top of the namespace attribution capability we introduced in Vault 1.9.
We have also introduced the ability to view changes to clients month over month via the client count API, and made other UI enhancements. Refer to [What is a Client?](/vault/docs/concepts/client-count) and [Client Count FAQ](/vault/docs/concepts/client-count/faq) for more details.
### Mount migration
We have made improvements to the `sys/remount` API endpoint to simplify the complexities of moving data, such as secret engine and authentication method configuration from one mount to another, within a namespace or across namespaces. This can help with restructuring namespaces and mounts for various reasons, including migrating mounts from root to other namespaces when transitioning to using namespaces for the first time. For step-by-step instructions, refer to the [Mount Move](/vault/tutorials/enterprise/mount-move) tutorial.
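For example, a sketch of moving a mount across namespaces with the CLI (the mount and namespace names are placeholders); in Vault 1.10 the remount runs asynchronously and returns a migration ID that can be checked via `sys/remount/status`:

```bash
# Move a KV mount from the root namespace into the "team-a" namespace.
vault secrets move secret/ team-a/secret/
```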
### Scaling external database plugins
Database plugins can now implement [plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing) which allows a single plugin process to be used for multiple database connections. Database plugin multiplexing will be enabled on the Oracle Database plugin starting in v0.6.0. We will extend this functionality to additional database plugins in subsequent releases.
Any external database plugins that want to adopt multiplexing support will have to update their main.go call from [dbplugin.Serve()](https://github.com/hashicorp/vault/blob/sdk/v0.4.1/sdk/database/dbplugin/v5/plugin_server.go#L13) to [dbplugin.ServeMultiplex()](https://github.com/hashicorp/vault/blob/sdk/v0.4.1/sdk/database/dbplugin/v5/plugin_server.go#L42). Multiplexable database plugins are compatible with older versions of Vault down to Vault 1.6. Refer to this [Oracle Database PR](https://github.com/hashicorp/vault-plugin-database-oracle/pull/74) as an example of the upgrade process.
### Consul secrets engine enhancements
Consul has supported [namespace](/consul/docs/enterprise/namespaces), [admin partitions](/consul/docs/enterprise/admin-partitions) and [ACL roles](/consul/commands/acl/role) for some time now. In this release we have added enhancements to the Consul Secrets engine to support namespace awareness and add admin partition and role support for Consul ACL tokens. This significantly simplifies the integrations for customers who want to achieve a zero trust security posture with both Vault and Consul.
### Using sessionStorage instead of localStorage for the Vault UI
Prior to Vault 1.10.0, the Vault UI used localStorage to store authentication information. The data in localStorage was persisted in browsers and removed only on demand. Now, we have switched the Vault UI to use sessionStorage instead, which ensures that the authentication information is stored in the current browser tab alone, thereby improving security.
### Advanced I/O handling for transform FPE
The Transform Secrets Engine allows users to securely encrypt data while providing control over the output format. In Vault 1.9, we introduced [additional format fields](/vault/docs/release-notes/1.9.0#advanced-i-o-handling-for-tranform-fpe-adp-transform) on the templates used for this workflow. In Vault 1.10.0, we have now added those two new fields, `encode_format` and `decode_format`, to the Create Template page on the UI under Advanced Templating.
## Breaking changes
The following section details breaking changes introduced in Vault 1.10.0.
### LDAP auth method entity alias mapping
In Vault 1.9, we added support to provide custom user filters through the [userfilter](/vault/api-docs/auth/ldap#userfilter) parameter. This support changed the way that entity aliases were mapped to an entity. Prior to Vault 1.9, alias names were always based on the [login username](/vault/api-docs/auth/ldap#username-3) (which in turn is based on the value of the [userattr](/vault/api-docs/auth/ldap#userattr)). In Vault 1.9, alias names no longer mapped to the login username. Instead, the mapping depends on other config values as well, such as [upndomain](/vault/api-docs/auth/ldap#upndomain), [binddn](/vault/api-docs/auth/ldap#binddn), [discoverdn](/vault/api-docs/auth/ldap#discoverdn), and [userattr](/vault/api-docs/auth/ldap#userattr).
With Vault 1.10.0, we re-introduced the option to force the alias name to map to the login username with the optional parameter `username_as_alias`. Users that had the LDAP auth method enabled prior to Vault 1.9 may want to consider setting this to true to revert to the old behavior. Otherwise, depending on the other aforementioned config values, logins may generate a new and different entity for an existing user with a previous entity associated in Vault. This in turn affects client counts, since there may be more than one entity tied to this user. The `username_as_alias` flag was also made available in subsequent Vault 1.8.x and Vault 1.9.x releases to allow for it to be set prior to a Vault 1.10.0 upgrade.
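As a sketch, the flag is set on the LDAP auth method's configuration endpoint (shown in isolation here; in practice, include your existing connection settings in the same write):

```bash
# Revert to pre-1.9 behavior: entity alias names map to the login username.
vault write auth/ldap/config username_as_alias=true
```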
## Known issues
### Single Vault follower restart causes election even with established quorum
We now support Server Side Consistent Tokens (see [Replication](/vault/docs/configuration/replication), [Vault Eventual Consistency](/vault/docs/enterprise/consistency), and [Upgrade to 1.10.0](/vault/docs/upgrading/upgrade-to-1.10.x)), which introduce a new token format that can only be used on nodes running version 1.10 or higher. This new format is enabled by default upon upgrading to the new version. Old-format tokens can be read by Vault 1.10.0, but new-format Vault 1.10 tokens cannot be read by older Vault versions.
For more details, see the [Server Side Consistent Tokens FAQ](/vault/docs/faq/ssct).
Since service tokens are always created on the leader, as long as the leader is not upgraded before performance standbys, service tokens will be of the old format and still be usable during the upgrade process. However, the usual upgrade process we recommend can't be relied upon to always upgrade the leader last. Due to this known [issue](https://github.com/hashicorp/vault/issues/14153), a Vault cluster using Integrated Storage may result in a leader not being upgraded last, and this can trigger a re-election. This re-election can cause the upgraded node to become the leader, resulting in the newly created tokens on the leader to be unusable on nodes that have not yet been upgraded. Note that this issue does not impact Vault Community Edition users.
We will have a fix for this issue in Vault 1.10.1. Until this issue is fixed, you may be at risk of having performance standbys unable to service requests until all nodes are upgraded. We recommend that you plan for a maintenance window to upgrade.
### Limited policy shows unhelpful message in UI after mounting a secret engine
When a user has a policy that allows creating a secret engine but not reading it, after successful creation, the user sees a message `n is undefined` instead of a permissions error. We will have a fix for this issue in an upcoming minor release.
### Adding/Modifying Duo MFA method for enterprise MFA triggers a panic error
When adding or modifying a Duo MFA method for step-up Enterprise MFA using the `sys/mfa/method/duo` endpoint, a panic gets triggered due to a missing schema field. We will have a fix for this in Vault 1.10.1. Until this issue is fixed, avoid making any changes to your Duo configuration if you are upgrading Vault to v1.10.0.
### Sign in to UI using OIDC auth method results in an error
Signing in to the Vault UI using an OIDC auth mount listed in the "tabs" of the form will result
in the following error: "Authentication failed: role with oidc role_type is not allowed".
The auth mounts listed in the "tabs" of the form are those that have [listing_visibility](/vault/api-docs/system/auth#listing_visibility-1)
set to `unauth`.
There is a workaround for this error that will allow you to sign in to Vault using the OIDC
auth method. Select the "Other" tab instead of selecting the specific OIDC auth mount tab.
From there, select "OIDC" from the "Method" select box and proceed to sign in to Vault.
### Error initializing raft storage type with windows
When starting Vault server 1.10.0 on Windows with less than 100GB of free disk space, raft storage initialization fails with an error about insufficient space on the disk. See this [issue](https://github.com/hashicorp/vault/issues/14895) for details. Windows users should wait until 1.10.1 to upgrade.
## Feature deprecations and EOL
Please refer to the [Deprecation Plans and Notice](/vault/docs/deprecation) page for up-to-date information on feature deprecations and plans. A [Feature Deprecation FAQ](/vault/docs/deprecation/faq) page is also available to address questions concerning decisions made about Vault feature deprecations.
secret backend using the Terraform Provider for Vault In addition frequent releases on the Terraform Provider for Vault have been incorporating the ability to configure newer resources and data sources Please read the documentation https registry terraform io providers hashicorp vault latest docs for more details KV secrets engine v2 patch operations We now support an additional method for managing KV v2 secrets vault api docs secret kv kv v2 to maintain least privilege security in certain types of automated environments This feature creates a new PATCH capability that enables partial updates to KV v2 secrets without requiring the READ privilege to the entire endpoint for an entity DB2 dynamic secrets support Vault operators can leverage the openldap secrets engine to manage credentials for IBM DB2 and the LDAP security plugin for Db2 This allows Db2 to offload authentication and authorization to the LDAP security plugin and allows Vault to manage static credentials or even generate dynamic users For more details refer to the For more details refer to the IBM Db2 Credentials Management vault tutorials secrets management ibm db2 openldap tutorial Temporal transit key rotation Proper key management includes occasionally rotating encryption keys to reduce the risks of a nonce reuse and opportunities for keys to be compromised Previously there was no automated way to rotate keys that is native to Vault Now we have provided a new configuration element on transit keys and tokenization transform configurations where a time interval triggers the keys to automatically rotate after the interval has lapsed PKI HSM forwarding To address security and compliance needs customers may require that keys be either created or stored within Hardware Security Models HSMs Vault 1 10 0 introduces an accommodation for this requirement with regards to the PKI Secrets Engine We now support offloading selected PKI operations to HSMs in particular allowing customers to both generate new PKI key pairs and sign verify some certificate workflows All of these operations are conducted in a way that never allows the private key material to leave the secure confines of the HSM itself AWS and AKV KMS forwarding The work done above to support HSM backed PKI operations inspired us to consider what other key possession paradigms we could support This led us to extend the implementation to support Cloud Key Management Systems in addition to HSMs In Vault 1 10 0 users may generate new PKI pairs and perform sign verify certificate workflows all with those keys never leaving the cloud KMS itself Vault 1 10 0 provides support for AWS Key Management Service and Azure Key Vault Key Management Service Server side consistent tokens Vault s eventual consistency vault docs enterprise consistency model precludes read after write guarantees when clients interact with performance standbys or performance replication clusters The Client Controlled Consistency vault docs enterprise consistency vault 1 7 mitigations mitigations supported with Vault 1 7 provide ways to achieve consistency through client modifications or by using the agent for proxied requests which is not possible in all cases The Server Side Consistent Tokens feature provides an implicit way to achieve consistency by embedding the minimum Write Ahead Log state information in the Service tokens returned from logins or token create requests This feature introduces changes in the token format and the new tokesn will be the default tokens starting in Vault 1 10 0 Vault 1 10 0 is 
backwards compatible with old tokens See Replication vault docs configuration replication Vault Eventual Consistency vault docs enterprise consistency Upgrade to 1 10 0 vault docs upgrading upgrade to 1 10 x and Server Side Consistent Token FAQ vault docs faq ssct to understand the various consistency options available with Vault 1 10 0 and the considerations to be aware of prior to selecting an option for your use case Vault agent features Support for telemetry Starting with Vault 1 10 0 the Vault Agent supports a new metrics endpoint and Telemetry vault docs agent telemetry stanza metrics around run time authentication success authentication failures cache hits cache misses proxy succes and proxy client errors This Vault Agent Telemetry should greatly help with the retrieval of key operational insights for Vault Agent deployments User assigned managed identities for auto auth in Azure With this enhancement vault docs agent autoauth methods azure users can specify user assigned managed identities via the object id and client id when configuring Vault agent auto auth for Azure This enables users that have more than one user assigned managed identity associated with their VM to specify which one they d like to use when authenticating via the Vault s Azure auth method Note that providing these parameters is an exclusive or operation Quit API endpoint with config Previously for instances where the Agent is a sidecar in a Kubernetes job and the job hangs you must either use shareProcessNamespace true for the container so that the process kill signals can be sent or avoid the sidecar container entirely and solely rely on an init container With this enhancement vault docs agent quit we have added support for a Quit API endpoint to automatically shut down the Vault Agent therefore eliminating the need to perform the workarounds Other features and enhancements This section describes other features and enhancements introduced as part of the Vault 1 10 0 release Client count improvements We have introduced auth mount based attribution of clients to help better understand where clients are being used within a cluster This is available via UI and API This is an enhancement on top of the namespace attribution capability we introduced in Vault 1 9 We have also introduced the ability to view changes to clients month over month via the client count API and made other UI enhancements Refer to What is a Client vault docs concepts client count and Client Count FAQ vault docs concepts client count faq for more details Mount migration We have made improvements to the sys remount API endpoint to simplify the complexities of moving data such as secret engine and authentication method configuration from one mount to another within a namespace or across namespaces This can help with restructuring namespaces and mounts for various reasons including migrating mounts from root to other namespaces when transitioning to using namespaces for the first time For step by step instructions refer to the Mount Move vault tutorials enterprise mount move tutorial Scaling external database plugins Database plugins can now implement plugin multiplexing vault docs plugins plugin architecture plugin multiplexing which allows a single plugin process to be used for multiple database connections Database plugin multiplexing will be enabled on the Oracle Database plugin starting in v0 6 0 We will extend this functionality to additional database plugins in subsequent releases Any external database plugins that want to adopt multiplexing 
support will have to update their main go call from dbplugin Serve https github com hashicorp vault blob sdk v0 4 1 sdk database dbplugin v5 plugin server go L13 to dbplugin ServeMultiplex https github com hashicorp vault blob sdk v0 4 1 sdk database dbplugin v5 plugin server go L42 Multiplexable database plugins are compatible with older versions of Vault down to Vault 1 6 Refer to this Oracle Database PR https github com hashicorp vault plugin database oracle pull 74 as an example of the upgrade process Consul secrets engine enhancements Consul has supported namespace consul docs enterprise namespaces admin partitions consul docs enterprise admin partitions and ACL roles consul commands acl role for some time now In this release we have added enhancements to the Consul Secrets engine to support namespace awareness and add admin partition and role support for Consul ACL tokens This significantly simplifies the integrations for customers who want to achieve a zero trust security posture with both Vault and Consul Using sessionStorage instead of localStorage for the Vault UI Prior to Vault 1 10 0 the Vault UI used localStorage to store authentication information The data in localStorage was persisted in browsers and removed only on demand Now we have switched the Vault UI to use sessionStorage instead which ensures that the authentication information is stored in the current browser tab alone thereby improving security Advanced I O handling for transform FPE The Transform Secrets Engine allows users to securely encrypt data while providing control over the output format In Vault 1 9 we introduced additional format fields vault docs release notes 1 9 0 advanced i o handling for tranform fpe adp transform on the templates used for this workflow In Vault 1 10 0 we have now added those two new fields encode format and decode format to the Create Template page on the UI under Advanced Templating Breaking changes The following section details breaking changes introduced in Vault 1 10 0 LDAP auth method entity alias mapping In Vault 1 9 we added support to provide custom user filters through the userfilter vault api docs auth ldap userfilter parameter This support changed the way that entity alias was mapped to an entity Prior to Vault 1 9 alias names were always based on the login username vault api docs auth ldap username 3 which in turn is based on the value of the userattr vault api docs auth ldap userattr In Vault 1 9 alias names no longer mapped to the login username Instead the mapping depends on other config values as well such as updomain vault api docs auth ldap upndomain binddn vault api docs auth ldap binddn discoverydn vault api docs auth ldap discoverdn and userattr vault api docs auth ldap userattr With Vault 1 10 0 we re introduced the option to force the alias name to map to the login username with the optional parameter username as alias Users that have the LDAP auth method enabled prior to Vault 1 9 may want to consider setting this to true to revert back to the old behavior Otherwise depending on the other aforementioned config values logins may generate a new and different entity for an existing user with a previous entity associated in Vault This in turn affects client counts since there may be more than one entity tied to this user The username as alias flag was also made available in subsequent Vault 1 8 x and Vault 1 9 x releases to allow for this to be set prior to a Vault 1 10 0 upgrade Known issues Single Vault follower restart causes election even with established 
quorum We now support Server Side Consistent Tokens See Replication vault docs configuration replication Vault Eventual Consistency vault docs enterprise consistency and Upgrade to 1 10 0 vault docs upgrading upgrade to 1 10 x which introduces a new token format that can only be used on nodes of 1 10 or higher version This new format is enabled by default upon upgrading to the new version Old format tokens can be read by Vault 1 10 0 but the new format Vault 1 10 tokens cannot be read by older Vault versions For more details see the Server Side Consistent Tokens FAQ vault docs faq ssct Since service tokens are always created on the leader as long as the leader is not upgraded before performance standbys service tokens will be of the old format and still be usable during the upgrade process However the usual upgrade process we recommend can t be relied upon to always upgrade the leader last Due to this known issue https github com hashicorp vault issues 14153 a Vault cluster using Integrated Storage may result in a leader not being upgraded last and this can trigger a re election This re election can cause the upgraded node to become the leader resulting in the newly created tokens on the leader to be unusable on nodes that have not yet been upgraded Note that this issue does not impact Vault Community Edition users We will have a fix for this issue in Vault 1 10 1 Until this issue is fixed you may be at risk of having performance standbys unable to service requests until all nodes are upgraded We recommended that you plan for a maintenance window to upgrade Limited policy shows unhelpful message in UI after mounting a secret engine When a user has a policy that allows creating a secret engine but not reading it after successful creation the user sees a message n is undefined instead of a permissions error We will have a fix for this issue in an upcoming minor release Adding Modifying Duo MFA method for enterprise MFA triggers a panic error When adding or modifying a Duo MFA method for step up Enterprise MFA using the sys mfa method duo endpoint a panic gets triggered due to a missing schema field We will have a fix for this in Vault 1 10 1 Until this issue is fixed avoid making any changes to your Duo configuration if you are upgrading Vault to v1 10 0 Sign in to UI using OIDC auth method results in an error Signing in to the Vault UI using an OIDC auth mount listed in the tabs of the form will result in the following error Authentication failed role with oidc role type is not allowed The auth mounts listed in the tabs of the form are those that have listing visibility vault api docs system auth listing visibility 1 set to unauth There is a workaround for this error that will allow you to sign in to Vault using the OIDC auth method Select the Other tab instead of selecting the specific OIDC auth mount tab From there select OIDC from the Method select box and proceed to sign in to Vault Error initializing raft storage type with windows When trying to start Vault server 1 10 0 on Windows and there is less than 100GB of free disk space there is an initialization error with raft DB related to insufficient space on the disk See this issue https github com hashicorp vault issues 14895 for details Windows users should wait till 1 10 1 to upgrade Feature deprecations and EOL Please refer to the Deprecation Plans and Notice vault docs deprecation page for up to date information on feature deprecations and plans An Feature Deprecation FAQ vault docs deprecation faq page is also available to address 
questions concerning decisions made about Vault feature deprecations |
---
layout: docs
page_title: 1.12.0
description: |-
This page contains release notes for Vault 1.12.0
---
# Vault 1.12.0 release notes
**Software Release date:** Oct. 12, 2022
**Summary:** Vault Release 1.12.0 offers features and enhancements that improve the user experience while solving critical issues previously encountered by our customers. We are providing an overview of improvements in this set of release notes.
We encourage you to upgrade to the latest release of Vault to take advantage of the new benefits provided. With this latest release, we offer solutions to critical feature gaps that were identified previously. Please refer to the [Changelog](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md) within the Vault release for further information on product improvements, including a comprehensive list of bug fixes.
Some of these enhancements and changes in this release include the following:
- Vault Enterprise now supports **PKCS#11** provider plugin (client library) functionality.
- Vault Enterprise can manage keys for **Oracle TDE**. This requires the Advanced Data Protection license.
- **PKI key revocation** improvements are made to Vault’s PKI engine, introducing a new OCSP responder and automatic CRL rebuilding (with an up-to-date delta CRL), which offer significant performance and data transfer improvements to revocation workflows.
- **BYOK in Transform engines** now allows users to import keys generated elsewhere.
- **KMIP Server Profile** adds support for additional operations, allowing Vault to claim support for the baseline server profile.
- **Transform secrets engine** supports time-based auto-key rotation for tokenization.
- **Path and Role-based Quotas** extend the existing Vault Quota support by allowing quotas to be extended to the API path suffixes and auth mount roles.
- **Licensing** termination behavior has changed: non-evaluation (production) licenses no longer have a termination date.
- **Redis Database Secrets Engine** is now available to manage static roles, generate dynamic credentials, and rotate the root credential on a stand-alone Redis server.
- **AWS Elasticache Database Secrets Engine** is introduced to manage static credentials for AWS Elasticache instances.
~> **Vault Enterprise:** Use [Integrated Storage](/vault/docs/configuration/storage/raft) or [Consul](/vault/docs/configuration/storage/consul) as your Vault's storage backend. Vault Enterprise will no longer start up if configured to use a storage backend other than Integrated Storage or Consul. (See the [Upgrade Guide](/vault/docs/upgrading/upgrade-to-1.12.x).)
## New features
This section describes the new features introduced in Vault 1.12.0.
### Transform secrets engine enhancements
-> **NOTE:** These features need the Vault Enterprise ADP License.
#### Bring your own key (BYOK) for transform
In release 1.11, we introduced BYOK support to Vault, enabling customers to import existing keys into the Vault Transit Secrets Engine and enabling secure and flexible Vault deployments.
We are extending that support to the Vault Transform Secrets Engine in this release.
#### MSSQL support
An MSSQL store is now available to be used as external storage with the tokenization Transform secrets engine. Refer to the following documents, [Transform Secrets Engine(API)](/vault/api-docs/secret/transform), [Transform Secrets Engine](/vault/docs/secrets/transform), and [Tokenization Transform](/vault/docs/secrets/transform/tokenization), for more information.
#### Key auto rotation
Periodic rotation of encryption keys is a recommended key management practice for a good security posture. In Vault release 1.10, we added support for auto key rotation in the Transit secrets engine. In Vault 1.12, the Transform secrets engine is enhanced to let users set a rotation interval at key creation; when the interval elapses, Vault automatically rotates the Transform key.
Refer to the following documentation [Tokenization Transform](/vault/docs/secrets/transform/tokenization) and [Transform Secrets Engine(API)](/vault/api-docs/secret/transform#rotate-tokenization-key) for more information.
### PKI secrets engine improvements
#### PKI secrets engine revocation enhancements
We are improving the Vault PKI engine’s revocation capabilities by adding support for the Online Certificate Status Protocol (OCSP) and a delta Certificate Revocation List (CRL) to track changes to the main CRL. These enhancements significantly streamline the customer experience with the PKI engine, making certificate revocation semantics easier to understand and manage. Additionally, support for automatic CRL rotation and periodic tidy operations helps reduce operator burden, alleviates the demand on cluster resources during periods of high revocation, and ensures clients are always served valid CRLs. Finally, support for Bring-Your-Own-Cert (BYOC) allows revocation of `no_store=true` certificates, and support for Proof-of-Possession (PoP) allows end-users to safely revoke their own certificates (with the corresponding private key) without operator intervention.
#### PKI and managed key support for RSA-PSS signatures
Since its initial release, Vault's PKI secrets engine only supported RSA-PKCS#1v1.5 (Public-Key Cryptography Standards) signatures for issuers and leaves. To conform with NIST's guidance around key transport and for compatibility with newer HSM firmware, we have included support for RSA-PSS signatures (Probabilistic Signature Scheme). See the section on [PSS Support in the PKI documentation](/vault/docs/secrets/pki/considerations#pss-support) for limitations of this feature.
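For example, a root issuer can be generated with PSS signatures by setting the `use_pss` flag. This is a minimal sketch assuming a PKI mount at `pki/`; the common name is a placeholder.

```shell-session
$ vault write pki/root/generate/internal \
    common_name="example.com" \
    key_type=rsa \
    key_bits=2048 \
    use_pss=true
```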
#### PKI telemetry improvements
In this release, we are adding additional telemetry to Vault’s PKI secrets engine, enabling customers to gather better insights into certificate usage via the count of stored and revoked certificates. Additionally, the Vault `tidy` function is enhanced with additional metrics that reflect the remaining stored and revoked certificates.
#### Auto-fetch CRL in the certificate auth method
Operators will now be able to specify one or more CRL URLs that Vault will automatically fetch and keep up-to-date, rather than having to push the CRLs to the cert auth method. This should make certificate management easier for those users that have large cert auth deployments.
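A minimal sketch of registering a CRL distribution point for auto-fetching, assuming the cert auth method is mounted at `auth/cert` and using a placeholder name and URL:

```shell-session
$ vault write auth/cert/crls/my-crl \
    url="https://ca.example.com/crl.pem"
```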
#### GCP Cloud key manager support
Managed Keys let Vault secrets engines (currently PKI) use keys stored in Cloud KMS systems for cryptographic operations like certificate signing. Vault 1.12 adds support for GCP Cloud KMS to the Managed Key system, where previously AWS, Azure, and PKCS#11 Hardware Security Modules were supported.
### KMIP server profile
The [Baseline Server Profile](https://docs.oasis-open.org/kmip/kmip-profiles/v2.1/os/kmip-profiles-v2.1-os.html) specifies the basic functionality expected of a KMIP server. In Vault 1.12, we offer support for the operations and attributes in the Baseline server profile. With this release, Vault Enterprise now supports the Symmetric Key lifecycle server profile, Baseline server profile, and the Basic Cryptographic server profile (as of release 1.11), enabling more effective KMIP integrations with various clients. This requires the Vault Enterprise ADP license.
### SSH secrets engine support for generating keys
Previously, Vault's SSH Secrets Engine, when used as an SSH CA, required requesters to provide their own public key for signing. In Vault 1.12, Vault can now generate credential key pairs dynamically, returning them to the requester.
This was a community contributed enhancement.
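A minimal sketch of requesting a generated, signed key pair from a CA-backed role, assuming the engine is mounted at `ssh/`; the role name and principal are placeholders.

```shell-session
$ vault write ssh/issue/my-role valid_principals="ubuntu"
```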
### Path and Role-Based resource quotas
In this release, the existing resource quota functionality has been enhanced. In addition to applying API rate limiting and lease quotas at the namespace or mount level, you can now apply quotas to [API path suffixes and auth mount roles](/vault/docs/enterprise/lease-count-quotas). This enhancement gives users more granular control over resource consumption.
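For illustration, the sketch below scopes a rate limit quota to a single role on an AppRole auth mount; the quota name, mount path, and role are placeholders.

```shell-session
$ vault write sys/quotas/rate-limit/approle-my-role \
    path="auth/approle" \
    role="my-role" \
    rate=100
```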
### Client count improvements
The billing period for the client counting API can now be specified with the [current month](/vault/docs/concepts/client-count) as the end date parameter. When this is done, the `new_clients` field contains an approximate value, computed with the HyperLogLog algorithm, indicating the number of new clients that came in during the current month. Note that for previous months, the number is an exact value.
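As an example, the activity endpoint can be queried with an end date falling in the current month; the timestamps below are placeholders.

```shell-session
$ vault read sys/internal/counters/activity \
    start_time=2022-01-01T00:00:00Z \
    end_time=2022-10-31T23:59:59Z
```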
### Redis database secrets engine
With the support of the Redis database secrets engine, users can use Vault to manage static and dynamic credentials for Redis OSS. The engine works similarly to other database secrets engines. Refer to the [Redis](/vault/docs/secrets/databases/redis) documentation for more information. Huge thanks to [Francis Hitchens](https://github.com/fhitchen), who contributed their repository to HashiCorp.
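A minimal configuration sketch for a stand-alone Redis server follows; the connection details and role name are placeholders, and the full parameter list is in the linked documentation.

```shell-session
$ vault write database/config/my-redis \
    plugin_name="redis-database-plugin" \
    host=127.0.0.1 \
    port=6379 \
    username=default \
    password=super-secret \
    allowed_roles="my-dynamic-role"
```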
### AWS ElastiCache database secrets engine
With the support of the AWS ElastiCache database secrets engine, users may use Vault to manage static credentials for AWS ElastiCache instances. The engine works similarly to other database secrets engines. Refer to the [ElastiCache](/vault/docs/secrets/databases/rediselasticache) documentation for more information.
### LDAP secrets engine
Vault 1.12 introduces a new LDAP secrets engine that unifies the user experience between the Active Directory (AD) secrets engine and OpenLDAP secrets engine. This new engine simplifies the user experience when Vault is used to manage credentials for directory services. It supports all implementations from both of the engines mentioned above (AD, LDAP, and RACF) and brings dynamic credential capabilities for users relying on Active Directory.
~> **Note:** This engine does _not_ replace the current Active Directory secrets engine. We will continue to maintain the engine and provide bug fixes, but encourage all new users to use the unified LDAP engine. We will communicate the schedule to deprecate the Active Directory secrets engine well in advance, providing time for users to migrate over.
### Terraform Vault provider: Vault version detection
Vault Terraform provider v3.9.0 can now query Vault to detect the server's version and then perform a semantic version comparison against a provided minimum threshold version to determine whether a selected feature is available for use. This allows the Vault provider to deterministically anticipate Vault's behavior.
### Plugin versioning
In prior versions of Vault, plugins were not “version-aware,” creating a suboptimal user experience during plugin installation and upgrades. In Vault 1.12, we are introducing the concept of versions to plugins, making plugins “version-aware” and allowing standardization of the release processes and offering a better user experience when installing and upgrading plugins.
### PKCS#11 client support
Software solutions often require cryptographic objects, such as keys or X.509 certificates, or need to perform operations such as certificate or key generation, hashing, encryption, decryption, and signing. Hardware Security Modules (HSMs) are traditionally used as a secure option, but are expensive and challenging to operationalize.
Vault Enterprise 1.12 is a PKCS#11 2.40 compliant provider, extended profile. PKCS#11 is the standard protocol supported for integrating with HSMs. Support for this protocol is the first step to enabling customers to consolidate HSMs. It also offers the operational flexibility and advantages of software for key generation, encryption, and object storage operations. The PKCS#11 support in Vault 1.12 covers a subset of key generation, encryption, decryption, and key storage operations. This requires the Enterprise ADP-KM license.
~> **Note:** With this feature, Vault does not become an HSM. HSMs are still needed where customer use cases require FIPS 140-2 Level 2+ compliance.
### Oracle TDE
With Vault 1.12, Vault Enterprise (ADP-KM) can now act as an external key manager for Oracle instances when Transparent Data Encryption is enabled. TDE allows users to configure Oracle to protect their Data Encryption Keys by wrapping them with a Key Encryption Key managed by Vault. Reading and writing of data are handled securely and transparently by Oracle database instances without needing user intervention. This requires the Enterprise ADP license.
### UI support for Okta number challenge
In Vault 1.11, we added support for Okta’s Number Challenge feature in the CLI and API. In Vault 1.12, we’ve extended this support to the Vault UI, allowing users to complete the Okta Number Challenge from a web browser, the command line, and the HTTP API.
### OIDC provider support in the UI
Vault can now act as an OIDC provider for applications that wish to delegate authentication to Vault and leverage its identity system. As an OIDC provider, Vault supports PKCE for authorization code flow, preventing attacks such as SSRF. After OIDC provider functionality went GA, our design and user research team gathered feedback from community members, and we simplified the setup experience. With a few CLI commands or UI clicks, users can now stand up a default OIDC provider, preconfigured and ready for applications to use.
## Other features and enhancements
### License termination behavior
License termination behavior has changed: non-evaluation licenses (production licenses) no longer have a termination date, making Vault more robust for Vault Enterprise customers. Refer to the updated [licensing FAQ](/vault/docs/enterprise/license/faq) for more information.
### Namespace custom metadata
Customers can now specify [custom metadata](/vault/api-docs/system/namespaces) on namespaces. The new `vault namespace patch` [command](/vault/docs/commands/namespace) can be used to update existing namespaces with custom metadata as well. This makes it possible to tag namespaces with additional fields that describe them (for example: owner, region, department).
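For example, custom metadata can be attached to an existing namespace with the new patch command; the namespace, keys, and values below are placeholders.

```shell-session
$ vault namespace patch \
    -custom-metadata=owner=platform-team \
    -custom-metadata=region=us-east-1 \
    ns1
```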
### Vault agent improvements
Vault Agent introduced new configuration parameters that significantly improve the use of Vault Agent. These include the following options (a sample configuration follows the list):
- Added `disable_idle_connections` configuration to disable leaving idle connections open in auto-auth, caching and templating.
- Added `disable_keep_alives` configuration to disable keep alives in auto-auth, caching and templating.
- JWT auto-auth now supports a `remove_jwt_after_reading` configuration option which defaults to true.
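The snippet below is a minimal sketch of an Agent configuration combining these options; the Vault address, JWT path, and role are placeholder values.

```hcl
# vault-agent.hcl (illustrative values)
vault {
  address = "https://vault.example.com:8200"
}

auto_auth {
  method "jwt" {
    config = {
      path                     = "/var/run/secrets/app/jwt"
      role                     = "my-role"
      # New in 1.12; defaults to true
      remove_jwt_after_reading = true
    }
  }
}

# Disable idle connections and keep-alives for all subsystems
disable_idle_connections = ["auto-auth", "caching", "templating"]
disable_keep_alives      = ["auto-auth", "caching", "templating"]
```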
## Known issues
There are no known issues documented for this release.
## Feature deprecations and EOL
Please refer to the [Deprecation Plans and Notice](/vault/docs/deprecation) page for up-to-date information on feature deprecations and plans. A [Feature Deprecation FAQ](/vault/docs/deprecation/faq) page addresses questions about decisions made about Vault feature deprecations. | vault | layout docs page title 1 12 0 description This page contains release notes for Vault 1 12 0 Vault 1 12 0 release notes Software Release date Oct 12 2022 Summary Vault Release 1 12 0 offers features and enhancements that improve the user experience while solving critical issues previously encountered by our customers We are providing an overview of improvements in this set of release notes We encourage you to upgrade to the latest release of Vault to take advantage of the new benefits provided With this latest release we offer solutions to critical feature gaps that were identified previously Please refer to the Changelog https github com hashicorp vault blob main CHANGELOG md within the Vault release for further information on product improvements including a comprehensive list of bug fixes Some of these enhancements and changes in this release include the following Vault Enterprise now supports PKCS 11 provider plugin client library functionality Vault Enterprise can manage keys for Oracle TDE This requires the Advanced Data Protection license PKI Key revocation improvements are made to Vault s PKI engine introducing a new OCSP responder and automatic CRL rebuilding with up to date Delta CRL that offers significant performance and data transfer improvements to revocation workflows BYOK in Transform engines now allow users to import their keys generated elsewhere KMIP Server Profile adds support for additional operations allowing Vault to claim support for the baseline server profile Transform secrets engine supports time based auto key rotation for tokenization Path and Role based Quotas extend the existing Vault Quota support by allowing quotas to be extended to the API path suffixes and auth mount roles Licensing termination behavior has changed where non evaluation licenses production licenses will no longer have a termination date Redis Database Secrets Engine is now available to manage static roles or generation of dynamic credentials as well as root credential rotation on a stand alone Redis server AWS Elasticache Database Secrets Engine is introduced to manage static credentials for AWS Elasticache instances Vault Enterprise Use Integrated Storage vault docs configuration storage raft or Consul vault docs configuration storage consul as your Vault s storage backend Vault Enterprise will no longer start up if configured to use a storage backend other than Integrated Storage or Consul See the Upgrade Guide vault docs upgrading upgrade to 1 12 x New features This section describes the new features introduced in Vault 1 12 0 Transform secrets engine enhancements NOTE These features need the Vault Enterprise ADP License Bring your own key BYOK for transform In release 1 11 we introduced BYOK support to Vault enabling customers to import existing keys into the Vault Transit Secrets Engine and enabling secure and flexible Vault deployments We are extending that support to the Vault Transform Secrets Engine in this release MSSQL support An MSSQL store is now available to be used as an external storage engine with tokenization Transform Secrets Engine Refer to the following documents Transform Secrets Engine API vault api docs secret transform Transform Secrets Engine vault docs secrets transform and 
Tokenization Transform vault docs secrets transform tokenization for more information Key auto rotation Periodic rotation of encryption keys is a recommended key management practice for a good security posture In Vault release 1 10 we added support for Auto key rotation for Transit Secrets Engine In Vault 1 12 the Transform secrets engine is now enhanced allowing users to set the rotation policy during key creation in a time interval which will cause Vault to rotate the Transform keys when the time interval elapses automatically Refer to the following documentation Tokenization Transform vault docs secrets transform tokenization and Transform Secrets Engine API vault api docs secret transform rotate tokenization key for more information PKI secrets engine improvements PKI secrets engine revocation enhancements We are improving Vault PKI Engine s revocation capabilities by adding support for the Online Certificate Status Protocol OCSP and a delta Certificate Revocation List CRL to track changes to the main CRL These enhancements significantly streamline customer experience with the PKI engine making the certification revocation semantics easier to understand and manage Additionally support for automatic CRL rotation and periodic tidy operations help reduce operator burden alleviate the demand on cluster resources during periods of high revocation and ensure clients are always served valid CRLs Finally support for Bring Your Own Cert BYOC allows revocation of no store true certificates and for Proof of Possession PoP allows end users to safely revoke their own certificates with corresponding private key without operator intervention PKI and managed key support for RSA PSS signatures Since its initial release Vault s PKI secrets engine only supported RSA PKCS 1v1 5 Public Key Cryptographic Standards signatures for issuers and leaves To conform with NIST s guidance around key transport and for compatibility with newer HSM Firmware we have included support for RSA PSS signatures Probabilistic Signature Scheme See the section on PSS Support in the PKI documentation vault docs secrets pki considerations pss support for limitations of this feature PKI telemetry improvements In this release we are adding additional telemetry to Vault s PKI secrets engine enabling customers to gather better insights into certificate usage via the count of stored and revoked certificates Additionally the Vault tidy function is enhanced with additional metrics that reflect the remaining stored and revoked certificates Auto fetch CRL in the certificate auth method Operators will now be able to specify one or more CRL URLs that Vault will automatically fetch and keep up to date rather than having to push the CRLs to the cert auth method This should make certificate management easier for those users that have large cert auth deployments GCP Cloud key manager support Managed Keys let Vault secrets engines currently PKI use keys stored in Cloud KMS systems for cryptographic operations like certificate signing Vault 1 12 adds support for GCP Cloud KMS to the Managed Key system where previously AWS Azure and PKCS 11 Hardware Security Modules were supported KMIP server profile The Baseline Server Profile https docs oasis open org kmip kmip profiles v2 1 os kmip profiles v2 1 os html specifies the basic functionality expected of a KMIP server In Vault 1 12 we offer support for the operations and attributes in the Baseline server profile With this release Vault Enterprise now supports the Symmetric Key lifecycle server profile 
Baseline server profile and the Basic Cryptographic server profile as of Release 1 11 enabling the support of KMIP integrations with various clients more effectively This requires the Vault Enterprise ADP license SSH secrets engine support for generating keys Previously Vault s SSH Secrets Engine when used as an SSH CA required requesters to provide their own public key for signing In Vault 1 12 Vault can now generate credential key pairs dynamically returning them to the requester This was a community contributed enhancement Path and Role Based resource quotas In this release the existing resource quota functionality has been enhanced In addition to applying the API rate limiting and lease quotas at the namespace or mount level you can now use the quotas to the API path suffixes and auth mount roles vault docs enterprise lease count quotas This enhancement provides users with more control over issued certificates Client count improvements The billing period for client counting API can now be specified with the current month vault docs concepts client count for the end date parameter When this is done the new clients field will have an hyperlog approximate value indicating the number of new clients that came in the current month Note that for the previous months the number will be an exact value Redis database secrets engine With the support of the Redis database secrets engine users can use Vault to manage static and dynamic credentials for Redis OSS The engine works similarly to other database secrets engines Refer to the Redis vault docs secrets databases redis documentation for more information Huge thanks to Francis Hitchens https github com fhitchen who contributed their repository to HashiCorp AWS elasticache database secrets engine With the support of the AWS ElastiCache database secrets engine users may use Vault to manage static credentials for AWS Elasticache instances The engine will work similarly to other database secrets engines Refer to the elasticache vault docs secrets databases rediselasticache documentation for more information LDAP secrets engine Vault 1 12 introduces a new LDAP secrets engine that unifies the user experience between the Active Directory AD secrets engine and OpenLDAP secrets engine This new engine simplifies the user experience when Vault is used to manage credentials for directory services This new engine supports all implementations from both of the engines mentioned above AD LDAP and RACF and brings dynamic credential capabilities for users relying on Active Directory Note This engine does not replace the current Active Directory secrets engine We will continue to maintain the engine and provide bug fixes but encourage all new users to use the unified LDAP engine We will communicate the schedule to deprecate the Active Directory secrets engine well in advance providing time for users to migrate over Terraform Vault provider Vault version detection Vault Terraform provider v3 9 0 can now query Vault to detect the server s version of the server and then perform a semantic version comparison against a provided minimum threshold version to determine whether a selected feature is available for use This allows for the Vault provider to deterministically anticipate Vault s behavior Plugin versioning In prior versions of Vault plugins were not version aware creating a suboptimal user experience during plugin installation and upgrades In Vault 1 12 we are introducing the concept of versions to plugins making plugins version aware and allowing 
standardization of the release processes and offering a better user experience when installing and upgrading plugins PKCS 11 client support Software solutions often require cryptographic objects like keys X 509 certificates or perform operations like a certificate or key generation hashing encryption decryption and signing Hardware Security Modules HSM are traditionally used as a secure option but are expensive and challenging to operationalize Vault Enterprise 1 12 is a PKCS 11 2 40 compliant provider extended profile PKCS 11 is the standard protocol supported for integrating with HSMs Support for this protocol is the first step to enabling customers to consolidate HSMs It also has the operational flexibility and advantages of software for key generation encryption and object storage operations The PKCS 11 support in Vault 1 12 supports a subset of key generation encryption decryption and key storage operations This requires the Enterprise ADP KM license Note With this feature Vault does not become an HSM HSMs are needed where customer use cases need FIPS 140 2 L2 compliance support Oracle TDE With Vault 1 12 Vault Enterprise ADP KM can now act as an external key manager for Oracle instances when Transparent Data Encryption is enabled TDE allows users to conjure and use Vault to protect their Data Encryption Keys by using Vault to protect them using a Key Encryption Key Reading and writing of data securely are handled transparently by Oracle database instances without needing user intervention This will need the Enterprise ADP license UI support for okta number challenge In Vault 1 11 we added support for Okta s Number Challenge feature in the CLI and API In Vault 1 12 we ve extended this support to the Vault UI allowing users to complete the Okta Number Challenge from a web browser the command line and the HTTP API OIDC provider support in the UI Vault can now act as an OIDC provider for applications that wish to delegate authentication to Vault and leverage its identity system As an OIDC provider Vault supports PKCE for authorization code flow preventing attacks such as SSRF After OIDC provider functionality went GA our design and user research team gathered feedback from community members and we simplified the setup experience With a few CLI commands or UI clicks users can now have a default OIDC provider with its defaults configured and ready to go for applications to utilize the functionality Other features and enhancements License termination behavior The Licensing termination behavior has changed where non evaluation licenses production licenses no longer have a termination date making Vault more robust for Vault Enterprise customers Also refer to the updated licensing FAQ vault docs enterprise license faq for more information Namespace custom metadata Customers can now specify custom metadata vault api docs system namespaces on the namespaces The new vault namespace patch command vault docs commands namespace can be used to update existing namespaces with custom metadata as well This will make it possible to tag namespaces with additional fields For example owner region department describing it Vault agent improvements Vault Agent introduced new configuration parameters that will significantly improve the use of Vault Agent These includes Added disable idle connections configuration to disable leaving idle connections open in auto auth caching and templating Added disable keep alives configuration to disable keep alives in auto auth caching and templating JWT auto auth now 
supports a remove jwt after reading configuration option which defaults to true Known issues There are no known issues documented for this release Feature deprecations and EOL Please refer to the Deprecation Plans and Notice vault docs deprecation page for up to date information on feature deprecations and plans A Feature Deprecation FAQ vault docs deprecation faq page addresses questions about decisions made about Vault feature deprecations |
---
layout: docs
page_title: "1.16.1 release notes"
description: |-
Key updates for Vault 1.16.1
---
# Vault 1.16.1 release notes
**GA date:** 2024-04-04
@include 'release-notes/intro.mdx'
## Important changes
| Version | Change |
|-----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.16.0+ | [Existing clusters do not show the current Vault version in UI by default](/vault/docs/upgrading/upgrade-to-1.16.x#default-policy-changes) |
| 1.16.0+ | [Default LCQ enabled when upgrading pre-1.9](/vault/docs/upgrading/upgrade-to-1.16.x#default-lcq-pre-1.9-upgrade) |
| 1.16.0+ | [External plugin environment variables take precedence over server variables](/vault/docs/upgrading/upgrade-to-1.16.x#external-plugin-variables) |
| 1.16.0+ | [LDAP auth entity alias names no longer include upndomain](/vault/docs/upgrading/upgrade-to-1.16.x#ldap-auth-entity-alias-names-no-longer-include-upndomain) |
| 1.16.0+ | [Secrets Sync now requires a one-time flag to operate](/vault/docs/upgrading/upgrade-to-1.16.x#secrets-sync-now-requires-setting-a-one-time-flag-before-use) |
| 1.16.0+ | [Azure secrets engine role creation failing](/vault/docs/upgrading/upgrade-to-1.16.x#azure-secrets-engine-role-creation-failing) |
| 1.16.1 - 1.16.3 | [New nodes added by autopilot upgrades provisioned with the wrong version](/vault/docs/upgrading/upgrade-to-1.15.x#new-nodes-added-by-autopilot-upgrades-provisioned-with-the-wrong-version) |
| 1.15.8+ | [Autopilot upgrade for Vault Enterprise fails](/vault/docs/upgrading/upgrade-to-1.15.x#autopilot) |
| 1.16.5 | [Listener stops listening on untrusted upstream connection with particular config settings](/vault/docs/upgrading/upgrade-to-1.16.x#listener-proxy-protocol-config) |
| 1.16.3 - 1.16.6 | [Vault standby nodes not deleting removed entity-aliases from in-memory database](/vault/docs/upgrading/upgrade-to-1.16.x#dangling-entity-alias-in-memory) |
| 0.7.0+ | [Duplicate identity groups created](/vault/docs/upgrading/upgrade-to-1.16.x#duplicate-identity-groups-created-when-concurrent-requests-sent-to-the-primary-and-pr-secondary-cluster) |
| Known Issue (0.7.0+) | [Manual entity merges fail](/vault/docs/upgrading/upgrade-to-1.16.x#manual-entity-merges-sent-to-a-pr-secondary-cluster-are-not-persisted-to-storage) |
| Known Issue (1.16.7-1.16.8) | [Some values in the audit logs not hmac'd properly](/vault/docs/upgrading/upgrade-to-1.16.x#client-tokens-and-token-accessors-audited-in-plaintext) |
| New default (1.16.13) | [Vault product usage metrics reporting](/vault/docs/upgrading/upgrade-to-1.16.x#product-usage-reporting) |
| Deprecation (1.16.13) | [`default_report_months` is deprecated for the `sys/internal/counters` API](/vault/docs/upgrading/upgrade-to-1.16.x#activity-log-changes) |
## Vault companion updates
Companion updates are Vault updates that live outside the main Vault binary.
<table>
<thead>
<tr>
<th style=>Release</th>
<th style=>Update</th>
<th style=>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style=>
Vault Secrets Operator (v0.5)
</td>
<td style=>ENHANCED</td>
<td style=>
Use templating to format, transform, and decode secrets before syncing to
Kubernetes secret.
<br /><br />
Learn more: <a href="/vault/docs/platform/k8s/vso/secret-transformation">Secret data transformation</a>
</td>
</tr>
</tbody>
</table>
## Core updates
Follow the learn more links for more information, or browse the list of
[Vault tutorials updated to highlight changes for the most recent GA release](/vault/tutorials/new-release).
<table>
<thead>
<tr>
<th style=>Release</th>
<th style=>Update</th>
<th style=>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style=>
Endpoint hardening
</td>
<td style=>ENHANCED</td>
<td style=>
Minimize network exposure by selectively redacting select fields like IP
addresses, cluster names, and Vault version from the HTTP responses of
your Vault server.
<br /><br />
Learn more:
<a href="/vault/docs/configuration/listener/tcp#redact_addresses"><tt>redact_addresses</tt> parameter</a>
</td>
</tr>
<tr>
<td style=>
External plugins
</td>
<td style=>GA</td>
<td style=>
Run external plugins in their own container with native container platform
controls.
<br /><br />
Learn more: <a href="/vault/docs/plugins/containerized-plugins">Containerize Vault plugins</a>
</td>
</tr>
</tbody>
</table>
## Enterprise updates
<table>
<thead>
<tr>
<th style=>Release</th>
<th style=>Update</th>
<th style=>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style=>
Long-term support
</td>
<td style=>GA</td>
<td style=>
Reduce risk and operational overhead with Vault Enterprise Long-Term
Support (LTS) releases.
<br /><br />
Learn more: <a href="/vault/docs/enterprise/lts">LTS overview</a>
</td>
</tr>
<tr>
<td style=>
Vault GUI
</td>
<td style=>GA</td>
<td style=>
Configure custom messages and display those messages to targeted users in
the Vault GUI.
<br /><br />
Learn more: <a href="/vault/docs/ui/custom-messages">Custom UI messages</a>
</td>
</tr>
<tr>
<td style=>
Audit logging
</td>
<td style=>GA</td>
<td style=>
Filter audit logs to write data to different destinations based on the content.
<br /><br />
Learn more: <a href="/vault/docs/enterprise/audit/filtering">Filter syntax for audit results</a>
</td>
</tr>
<tr>
<td style=>
Static secret caching
</td>
<td style=>GA</td>
<td style=>
Use Vault Proxy to cache static secrets for a set period of time and receive
event notifications when secrets change.
<br /><br />
Learn more: <a href="/vault/docs/agent-and-proxy/proxy/caching/static-secret-caching">Vault Proxy static secret caching</a>
</td>
</tr>
<tr>
<td style=>
Event notifications
</td>
<td style=>GA</td>
<td style=>
Subscribe to notifications for various events in Vault. Includes support
for filtering, permissions, and cluster configurations with K-V secrets.
<br /><br />
Learn more: <a href="/vault/docs/concepts/events">Events</a>
</td>
</tr>
<tr>
<td style=>
Public Key Infrastructure (PKI)
</td>
<td style=>BETA</td>
<td style=>
Automate certificate lifecycle management for IoT/EST enabled devices with
native EST protocol support.
<br /><br />
Learn more: <a href="/vault/docs/secrets/pki/est">Enrollment over Secure Transport (EST)</a>
</td>
</tr>
<tr>
<td style=>
Default lease count quotas
</td>
<td style=>GA</td>
<td style=>
New server deployments automatically create a lease count quota in the
root namespace with a 300K limit.
<br /><br />
Learn more: <a href="/vault/docs/enterprise/lease-count-quotas">Lease count quotas</a>
</td>
</tr>
<tr>
<td style=>
License utilization reporting
</td>
<td style=>ENHANCED</td>
<td style=>
Use the Vault CLI to bundle and report usage data to HashiCorp for
clusters that do not report license utilization data automatically.
<br /><br />
Learn more: <a href="/vault/docs/enterprise/license/manual-reporting">Manual license utilization reporting</a>
</td>
</tr>
<tr>
<td style=>
Secrets sync
</td>
<td style=>GA</td>
<td style=>
Sync Key Value (KV) v2 data between Vault and secrets managers from AWS,
Azure, Google Cloud Platform (GCP), GitHub, and Vercel.
<br /><br />
Learn more: <a href="/vault/docs/sync">Secrets Sync</a>
</td>
</tr>
<tr>
<td style=>
AWS plugin
</td>
<td style=>GA</td>
<td style=>
Use automatic identity tokens for workload identity federation
authentication flows with the AWS secret engine without explicitly
configuring sensitive security credentials.
<br /><br />
Learn more: <a href="/vault/docs/secrets/aws">AWS secrets engine</a>
</td>
</tr>
</tbody>
</table>
## Feature deprecations and EOL
Deprecated in 1.16 | Retired in 1.16
------------------ | ---------------
None | None
@include 'release-notes/deprecation-note.mdx' | vault | layout docs page title 1 16 1 release notes description Key updates for Vault 1 16 1 Vault 1 16 1 release notes GA date 2024 04 04 include release notes intro mdx Important changes Version Change 1 16 0 Existing clusters do not show the current Vault version in UI by default vault docs upgrading upgrade to 1 16 x default policy changes 1 16 0 Default LCQ enabled when upgrading pre 1 9 vault docs upgrading upgrade to 1 16 x default lcq pre 1 9 upgrade 1 16 0 External plugin environment variables take precedence over server variables vault docs upgrading upgrade to 1 16 x external plugin variables 1 16 0 LDAP auth entity alias names no longer include upndomain vault docs upgrading upgrade to 1 16 x ldap auth entity alias names no longer include upndomain 1 16 0 Secrets Sync now requires a one time flag to operate vault docs upgrading upgrade to 1 16 x secrets sync now requires setting a one time flag before use 1 16 0 Azure secrets engine role creation failing vault docs upgrading upgrade to 1 16 x azure secrets engine role creation failing 1 16 1 1 16 3 New nodes added by autopilot upgrades provisioned with the wrong version vault docs upgrading upgrade to 1 15 x new nodes added by autopilot upgrades provisioned with the wrong version 1 15 8 Autopilot upgrade for Vault Enterprise fails vault docs upgrading upgrade to 1 15 x autopilot 1 16 5 Listener stops listening on untrusted upstream connection with particular config settings vault docs upgrading upgrade to 1 16 x listener proxy protocol config 1 16 3 1 16 6 Vault standby nodes not deleting removed entity aliases from in memory database vault docs upgrading upgrade to 1 16 x dangling entity alias in memory 0 7 0 Duplicate identity groups created vault docs upgrading upgrade to 1 16 x duplicate identity groups created when concurrent requests sent to the primary and pr secondary cluster Known Issue 0 7 0 Manual entity merges fail vault docs upgrading upgrade to 1 16 x manual entity merges sent to a pr secondary cluster are not persisted to storage Known Issue 1 16 7 1 16 8 Some values in the audit logs not hmac d properly vault docs upgrading upgrade to 1 16 x client tokens and token accessors audited in plaintext New default 1 16 13 Vault product usage metrics reporting vault docs upgrading upgrade to 1 6 x product usage reporting Deprecation 1 16 13 default report months is deprecated for the sys internal counters API vault docs upgrading upgrade to 1 16 x activity log changes Vault companion updates Companion updates are Vault updates that live outside the main Vault binary table thead tr th style Release th th style Update th th style Description th tr thead tbody tr td style Vault Secrets Operator v0 5 td td style ENHANCED td td style Use templating to format transform and decode secrets before syncing to Kubernetes secret br br Learn more a href vault docs platform k8s vso secret transformation Secret data transformation a td tr tbody table Core updates Follow the learn more links for more information or browse the list of Vault tutorials updated to highlight changes for the most recent GA release vault tutorials new release table thead tr th style Release th th style Update th th style Description th tr thead tbody tr td style Endpoint hardening td td style ENHANCED td td style Minimize network exposure by selectively redacting select fields like IP addresses cluster names and Vault version from the HTTP responses of your Vault server br br Learn more nbsp a href vault docs 
configuration listener tcp redact addresses tt redact addresses tt parameter a td tr tr td style External plugins td td style GA td td style Run external plugins in their own container with native container platform controls br br Learn more a href vault docs plugins containerized plugins Containerize Vault plugins a td tr tbody table Enterprise updates table thead tr th style Release th th style Update th th style Description th tr thead tbody tr td style Long term support td td style GA td td style Reduce risk and operational overhead with Vault Enterprise Long Term Support LTS releases br br Learn more a href vault docs enterprise lts LTS overview a td tr tr td style Vault GUI td td style GA td td style Configure custom messages and display those messages to targeted users in the Vault GUI br br Learn more a href vault docs ui custom messages Custom UI messages a td tr tr td style Audit logging td td style GA td td style Filter audit logs to write data to different destinations based on the content br br Learn more a href vault docs enterprise audit filtering Filter syntax for audit results a td tr tr td style Static secret caching td td style GA td td style Use Vault Proxy to cache static secrets for a set period of time and receive event notifications when secrets change br br Learn more a href vault docs agent and proxy proxy caching static secret caching Vault Proxy static secret caching a td tr tr td style Event notifications td td style GA td td style Subscribe to notifications for various events in Vault Includes support for filtering permissions and cluster configurations with K V secrets br br Learn more a href vault docs concepts events Events a td tr tr td style Public Key Infrastructure PKI td td style BETA td td style Automate certificate lifecycle management for IoT EST enabled devices with native EST protocol support br br Learn more a href vault docs secrets pki est Enrollment over Secure Transport EST a td tr tr td style Default lease count quotas td td style GA td td style New server deployments automatically create a lease count quota in the root namespace with a 300K limit br br Learn more a href vault docs enterprise lease count quotas Lease count quotas a td tr tr td style License utilization reporting td td style ENHANCED td td style Use the Vault CLI to bundle and report usage data to HashiCorp for clusters that do not report license utilization data automatically br br Learn more a href vault docs enterprise license manual reporting Manual license utilization reporting a td tr tr td style Secrets sync td td style GA td td style Sync Key Value KV v2 data between Vault and secrets managers from AWS Azure Google Cloud Platform GCP GitHub and Vercel br br Learn more a href vault docs sync Secrets Sync a td tr tr td style AWS plugin td td style GA td td style Use automatic identity tokes for workload identity federation authentication flows with the AWS secret engine without explicitly configuring sensitive security credentials br br Learn more a href vault docs secrets aws AWS secrets engine a td tr tbody table Feature deprecations and EOL Deprecated in 1 16 Retired in 1 16 None None include release notes deprecation note mdx |
---
layout: docs
page_title: "1.14.0 release notes"
description: |-
Key updates for Vault 1.14.0
---
# Vault 1.14.0 release notes
**GA date:** June 21, 2023
@include 'release-notes/intro.mdx'
## Known issues and breaking changes
Version | Issue
------- | ------------------------------------------------------------
1.14.0+ | [Users limited by control groups can only access issuer detail from PKI overview page](/vault/docs/upgrading/upgrade-to-1.14.x#ui-pki-control-groups)
All | [API calls to update-primary may lead to data loss](/vault/docs/upgrading/upgrade-to-1.14.x#update-primary-data-loss)
1.14.0+ | [AWS static roles ignore changes to rotation period](/vault/docs/upgrading/upgrade-to-1.14.x#aws-static-role-rotation)
1.14.0+ | [UI Collapsed navbar does not allow certain click events](/vault/docs/upgrading/upgrade-to-1.14.x#ui-collapsed-navbar)
1.14.3 - 1.14.5 | [Vault storing references to ephemeral sub-loggers leading to unbounded memory consumption](/vault/docs/upgrading/upgrade-to-1.14.x#vault-is-storing-references-to-ephemeral-sub-loggers-leading-to-unbounded-memory-consumption)
1.14.4 - 1.14.5 | [Internal error when vault policy in namespace does not exist](/vault/docs/upgrading/upgrade-to-1.14.x#internal-error-when-vault-policy-in-namespace-does-not-exist)
1.14.0+ | [Sublogger levels not adjusted on reload](/vault/docs/upgrading/upgrade-to-1.14.x#sublogger-levels-unchanged-on-reload)
1.14.5 | [Fatal error during expiration metrics gathering causing Vault crash](/vault/docs/upgrading/upgrade-to-1.15.x#fatal-error-during-expiration-metrics-gathering-causing-vault-crash)
1.14.5 | [User lockout potential double logging](/vault/docs/upgrading/upgrade-to-1.14.x#user-lockout-logging)
1.14.5 - 1.14.9 | [Deadlock can occur on performance secondary clusters with many mounts](/vault/docs/upgrading/upgrade-to-1.14.x#deadlock-can-occur-on-performance-secondary-clusters-with-many-mounts)
## Vault companion updates
Companion updates are Vault updates that live outside the main Vault binary.
<table>
<thead>
<tr>
<th style=>Release</th>
<th style=>Update</th>
<th style=>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style=>
Vault Secrets Operator for Kubernetes
</td>
<td style=>GA</td>
<td style=>
Directly connect Vault secrets into Pods as native Kubernetes Secrets
without modifying your application code.
<br /><br />
Learn more: <a href="/vault/docs/platform/k8s/vso">Vault Secrets Operator</a>
</td>
</tr>
<tr>
<td rowspan={2} style=>
Terraform
</td>
<td style=>GA</td>
<td style=>
Use LDAP authentication from the unified LDAP engine to Terraform Vault
Provider.
<br /><br />
Learn more: <a href="/vault/docs/secrets/ldap">LDAP Secrets Engine</a>
</td>
</tr>
<tr>
<td style=>ENHANCED</td>
<td style=>
Support for additional PKI issuers and keys endpoints.
<br /><br />
Learn more: <a href="/vault/docs/secrets/pki">PKI Secrets Engine</a>
</td>
</tr>
</tbody>
</table>
## Core updates
Follow the learn more links for more information, or browse the list of
[Vault tutorials updated to highlight changes for the most recent GA release](/vault/tutorials/new-release).
<table>
<thead>
<tr>
<th style=>Release</th>
<th style=>Update</th>
<th style=>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan={2} style=>
Public Key Infrastructure (PKI)
</td>
<td style=>GA</td>
<td style=>
Use ACME to automate certificate lifecycle management for private PKI
needs with standard ACME clients like Certbot and k8s cert-manager.
Request certificates from a Vault server without needing to know Vault
APIs or authentication mechanisms.
<br /><br />
Learn more:
<a href="/vault/api-docs/secret/pki#acme-certificate-issuance">PKI Secrets Engine API: ACME</a>
</td>
</tr>
<tr>
<td style=>GA</td>
<td style=>
Use the improved PKI web UI to manage your PKI instance with intuitive
configuration and reasonable defaults for workflows, metadata, issuer
info, mount and tidy configuration, cross signing, multi-issuers, and more.
<br /><br />
Learn more:
<a href="/vault/api-docs/secret/pki#acme-certificate-issuance">PKI Secrets Engine</a>
</td>
</tr>
<tr>
<td style=>
Security patches
</td>
<td style=>ENHANCED</td>
<td style=>
Various security improvements to remediate low severity and informational
findings from a 3rd party security audit.
<br /><br />
Learn more: <a href="/vault/docs/internals/security">Vault security model</a>
</td>
</tr>
<tr>
<td rowspan={2} style=>
Vault Agent
</td>
<td style=>BETA</td>
<td style=>
Fetch secrets directly into your application as environment variables.
<br /><br />
Learn more: <a href="/vault/docs/agent-and-proxy/agent/process-supervisor">Process Supervisor Mode</a>
</td>
</tr>
<tr>
<td style=>GA</td>
<td style=>
Use a new subcommand and daemon, Vault Proxy, to access the proxy
functionality of Vault Agent. Vault Proxy will handle Vault Agent proxy
functionality going forward to simplify use case decisions for users.
<br /><br />
Learn more: <a href="/vault/docs/agent-and-proxy/proxy">Vault Proxy</a>
</td>
</tr>
<tr>
<td rowspan={3} style=>
Plugin support
</td>
<td style=>GA</td>
<td style=>
Capture plugin metadata in the Vault audit log.
<br /><br />
Learn more: <a href="/vault/docs/audit/syslog">Syslog audit device</a>
</td>
</tr>
<tr>
<td style=>GA</td>
<td style=>
Use X509 Authentication and Terraform Vault Provider in the MongoDB Atlas
Database Secrets Engine.
<br /><br />
Learn more:
<a href="/vault/docs/secrets/databases/mongodbatlas">MongoDB Atlas Database Secrets Engine</a>
</td>
</tr>
<tr>
<td style=>ENHANCED</td>
<td style=>
Dependency updates and more robust multiplexing for secrets and
authentication plugins.
<br /><br />
Learn more:
<a href="/vault/docs/plugins/plugin-development#serving-a-plugin-with-multiplexing">
Serving a plugin with multiplexing (Plugin Development)
</a>
</td>
</tr>
<tr>
<td rowspan={2} style=>
AWS support
</td>
<td style=>ENHANCED</td>
<td style=>
Monitoring and performance enhancements for the Vault Lambda extension.
<br /><br />
Learn more:
<a href="/vault/docs/platform/aws/lambda-extension">Vault Lambda Extension guide</a>
</td>
</tr>
<tr>
<td style=>GA</td>
<td style=>
Use static roles for IAM users in the AWS Secrets Engine.
<br /><br />
Learn more: <a href="/vault/docs/secrets/aws">AWS Secrets Engine</a>
</td>
</tr>
<tr>
<td style=>
Vault GUI
</td>
<td style=>ENHANCED</td>
<td style=>
Streamlined and aligned navigation with HCP Vault UI.
<br /><br />
Learn more: <a href="/vault/docs/configuration/ui">Vault UI</a>
</td>
</tr>
<tr>
<td style=>
Transit
</td>
<td style=>ENHANCED</td>
<td style=>
<b>Contributed by the Vault community</b>. Support for public-key only Transit
keys and BYOK-secured export of key material.
<br /><br />
Learn more: <a href="/vault/api-docs/secret/transit">Transit Secrets Engine</a>
</td>
</tr>
</tbody>
</table>
## Enterprise updates
<table>
<thead>
<tr>
<th style=>Release</th>
<th style=>Update</th>
<th style=>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style=>
Vault replication
</td>
<td style=>ENHANCED</td>
<td style=>
Stability improvements based on customer feedback for Vault 1.13. See the
<a href="https://raw.githubusercontent.com/hashicorp/vault/main/CHANGELOG.md">
Vault changelog
</a>
for a full list of bug fixes.
<br /><br />
Learn more:
<a href="/vault/docs/internals/replication">Replication overview</a>
</td>
</tr>
<tr>
<td style=>
License utilization reporting
</td>
<td style=>GA</td>
<td style=>
Enables automatic license utilization reporting for you and HashiCorp to
ensure transparent, accurate billing.
<br /><br />
Learn more:
<a href="/vault/docs/enterprise/license/utilization-reporting">Automated License utilization reporting</a>
</td>
</tr>
</tbody>
</table>
## Feature deprecations and EOL
Deprecated in 1.14 | Retired in 1.14
------------------ | ---------------
Vault Agent API proxy support | [Duplicative Docker Images](https://hub.docker.com/_/vault)
@include 'release-notes/deprecation-note.mdx' | vault | layout docs page title 1 14 0 release notes description Key updates for Vault 1 14 0 Vault 1 14 0 release notes GA date June 21 2023 include release notes intro mdx Known issues and breaking changes Version Issue 1 14 0 Users limited by control groups can only access issuer detail from PKI overview page vault docs upgrading upgrade to 1 14 x ui pki control groups All API calls to update primary may lead to data loss vault docs upgrading upgrade to 1 14 x update primary data loss 1 14 0 AWS static roles ignore changes to rotation period vault docs upgrading upgrade to 1 14 x aws static role rotation 1 14 0 UI Collapsed navbar does not allow certain click events vault docs upgrading upgrade to 1 14 x ui collapsed navbar 1 14 3 1 14 5 Vault storing references to ephemeral sub loggers leading to unbounded memory consumption vault docs upgrading upgrade to 1 14 x vault is storing references to ephemeral sub loggers leading to unbounded memory consumption 1 14 4 1 14 5 Internal error when vault policy in namespace does not exist vault docs upgrading upgrade to 1 14 x internal error when vault policy in namespace does not exist 1 14 0 Sublogger levels not adjusted on reload vault docs upgrading upgrade to 1 14 x sublogger levels unchanged on reload 1 14 5 Fatal error during expiration metrics gathering causing Vault crash vault docs upgrading upgrade to 1 15 x fatal error during expiration metrics gathering causing vault crash 1 14 5 User lockout potential double logging vault docs upgrading upgrade to 1 14 x user lockout logging 1 14 5 1 14 9 Deadlock can occur on performance secondary clusters with many mounts vault docs upgrading upgrade to 1 14 x deadlock can occur on performance secondary clusters with many mounts Vault companion updates Companion updates are Vault updates that live outside the main Vault binary table thead tr th style Release th th style Update th th style Description th tr thead tbody tr td style Vault Secrets Operator for Kubernetes td td style GA td td style Directly connect Vault secrets into Pods as native Kubernetes Secrets without modifying your application code br br Learn more a href vault docs platform k8s vso Vault Secrets Operator a td tr tr td rowspan 2 style Terraform td td style GA td td style Use LDAP authentication from the unified LDAP engine to Terraform Vault Provider br br Learn more a href vault docs secrets ldap LDAP Secrets Engine a td tr tr td style ENHANCED td td style Support for additional PKI issuers and keys endpoints br br Learn more a href vault docs secrets pki PKI Secrets Engine a td tr tbody table Core updates Follow the learn more links for more information or browse the list of Vault tutorials updated to highlight changes for the most recent GA release vault tutorials new release table thead tr th style Release th th style Update th th style Description th tr thead tbody tr td rowspan 2 style Public Key Infrastructure PKI td td style GA td td style Use ACME to automate certificate lifecycle management for private PKI needs with standard ACME clients like Certbot and k8s cert manager Request certificates from a Vault server without needing to know Vault APIs or authentication mechanisms br br Learn more nbsp a href vault api docs secret pki acme certificate issuance PKI Secrets Engine API ACME a td tr tr td style GA td td style Use the improved PKI web UI to manage your PKI instance with intuitive configuration and reasonable defaults for workflows metadata issuer info mount and tidy 
configuration cross signing multi issuers etc and includes br br Learn more nbsp a href vault api docs secret pki acme certificate issuance PKI Secrets Engine a td tr tr td style Security patches td td style ENHANCED td td style Various security improvements to remediate low severity and informational findings from a 3rd party security audit br br Learn more a href vault docs internals security Vault security model a td tr tr td rowspan 2 style Vault Agent td td style BETA td td style Fetch secrets directly into your application as environment variables br br Learn more a href vault docs agent and proxy agent process supervisor Process Supervisor Mode a td tr tr td style GA td td style Use a new subcommand and daemon Vault Proxy to access the proxy functionality of Vault Agent Vault Proxy will handle Vault Agent proxy functionality going forward to simplify use case decisions for users br br Learn more a href vault docs agent and proxy proxy Vault Proxy a td tr tr td rowspan 3 style Plugin support td td style GA td td style Capture plugin metadata in the Vault audit log br br Learn more a href vault docs audit syslog Syslog audit device a td tr tr td style GA td td style Use X509 Authentication and Terraform Vault Provider in the MongoDB Atlas Database Secrets Engine br br Learn more nbsp a href vault docs secrets databases mongodbatlas MongoDB Atlas Database Secrets Engine a td tr tr td style ENHANCED td td style Dependency updates and more robust multiplexing for secrets and authentication plugins br br Learn more nbsp a href vault docs plugins plugin development serving a plugin with multiplexing Serving a plugin with multiplexing Plugin Development a td tr tr td rowspan 2 style AWS support td td style ENHANCED td td style Monitoring and performance enhancements for the Vault Lambda extension br br Learn more nbsp a href vault docs platform aws lambda extension Vault Lambda Extension guide a td tr tr td style GA td td style Use static roles for IAM users in the AWS Secrets Engine br br Learn more a href vault docs secrets aws AWS Secrets Engine a td tr tr td style Vault GUI td td style ENHANCED td td style Streamlined and aligned navigation with HCP Vault UI br br Learn more a href vault docs configuration ui Vault UI a td tr tr td style Transit td td style ENHANCED td td style b Contributed by the Vault community b Support for public key only Transit keys and BYOK secured export of key material br br Learn more a href vault api docs secret transit Transit Secrets Engine a td tr tbody table Enterprise updates table thead tr th style Release th th style Update th th style Description th tr thead tbody tr td style Vault replication td td style ENHANCED td td style Stability improvements based on customer feedback for Vault 1 13 See the a href https raw githubusercontent com hashicorp vault main CHANGELOG md Vault changelog a for a full list of bug fixes br br Learn more nbsp a href vault docs internals replication Replication overview a td tr tr td style License utilization reporting td td style GA td td style Enables automatic license utilization reporting for you and HashiCorp to ensure transparent accurate billing br br Learn more nbsp a href vault docs enterprise license utilization reporting Automated License utilization reporting a td tr tbody table Feature deprecations and EOL Deprecated in 1 14 Retired in 1 14 Vault Agent API proxy support Duplicative Docker Images https hub docker com vault include release notes deprecation note mdx |
---
layout: docs
page_title: 1.13.0
description: |-
This page contains release notes for Vault 1.13.0
---
# Vault 1.13.0 release notes
**Software Release date:** March 1, 2023
**Summary:** Vault Release 1.13.0 offers features and enhancements that improve
the user experience while solving critical issues previously encountered by our
customers. We are providing an overview of improvements in this set of release
notes.
We encourage you to [upgrade](/vault/docs/upgrading) to the latest release of
Vault to take advantage of the new benefits provided. With this latest release,
we offer solutions to critical feature gaps that were identified previously.
Please refer to the
[Changelog](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1130-rc1)
within the Vault release for further information on product improvements,
including a comprehensive list of bug fixes.
Some of these enhancements and changes in this release include the following:
- **PKI improvements:**
  - **Cross Cluster PKI Certificate Revocation:** Introducing a new unified
    OCSP responder and CRL builder that enables a view of certificate
    revocations and CRLs across clusters for a given PKI mount.
- **PKI UI Beta:** New UI introducing cross-signing flow, overview page,
roles and keys view.
- **Health Checks:** Provide a health overview of PKI mounts for proactive
actions and troubleshooting.
  - **Command Line:** Simplified CLI commands to discover and rotate issuers,
    plus related commands for PKI mounts.
- **Azure Auth Improvements:**
- **Rotate-root support:** Add the ability to rotate the root account's
client secret defined in the auth method's configuration via the new
`rotate-root` endpoint.
- **Managed Identities authentication:** The auth method now allows any Azure
resource that supports managed identities to authenticate with Vault.
- **VMSS Flex authentication:** Add support for Virtual Machine Scale Set
(VMSS) Flex authentication.
- **GCP Secrets Impersonated Account Support:** Add support for GCP service
account impersonation, allowing callers to generate a GCP access token without
requiring Vault to store or retrieve a GCP service account key for each role.
- **Managed Keys in Transit Engine:** Support for offloading Transit Key
operations to HSMs/external KMS.
- **KMIP Secret Engine Enhancements:** Implemented Asymmetric Key Lifecycle
Server and Advanced Cryptographic Server profiles. Added support for RSA keys
and operations such as: MAC, MAC Verify, Sign, Sign Verify, RNG Seed and RNG
Retrieve.
- **Vault as an SSM:** Support is planned for an upcoming Vault PKCS#11 Provider
version to include mechanisms for encryption, decryption, signing and
signature verification for AES and RSA keys.
- **Replication (enterprise):** We fixed a bug that could cause a cluster to
wind up in a permanent merkle-diff/merkle-sync loop and never enter
stream-wals, particularly in cases of high write loads on the primary cluster.
- **Share Secrets in Independent Namespaces (enterprise):** You can now add
  users from namespaces outside a namespace hierarchy to a group in a given
  namespace hierarchy. You can also grant Vault Agent access to secrets outside
  the namespace where it authenticated, reducing the number of Agents you need
  to run.
- **User Lockout:** Vault now supports configuration to lock out users after
  consecutive failed login attempts. This feature is **enabled by default**
  in 1.13 for the userpass, ldap, and approle auth methods (see the sample
  server configuration after this list).
- **Event System (Alpha):** Vault has a new experimental event system. Events
are currently only generated on writes to the KV secrets engine, but external
plugins can also be updated to start generating events.
- **Kubernetes authentication plugin bug fix:** Ensures a consistent TLS
configuration for all k8s API requests. This fixes a bug where it was possible
for the http.Client's Transport to be missing the necessary root CAs to ensure
that all TLS connections between the auth engine and the Kubernetes API were
validated against the configured set of CA certificates.
- **Kubernetes Secrets Engine on Vault UI:** Introducing Kubernetes secrets
  engine support in the UI.
- **Client Count UI improvements:** Combines the current month and previous
  history into one dashboard.
- **OCSP Support in the TLS Certificate Auth Method:** The auth method can now
  check for revoked certificates using the OCSP protocol.
- **UI Wizard removal:** The UI Wizard has been removed from the UI since the
information was occasionally out-of-date and did not align with the latest
changes. A new and enhanced UI experience is planned in a future release.
- **Vault Agent improvements:**
  - Auto-auth introduced the `token_file` method, which reads an existing
    token from a file (see the sample Agent configuration after this list).
    The token file method is designed for development and testing and is not
    suitable for production deployment.
- Listeners for the Vault Agent can define a role set to `metrics_only` so
that a service can be configured to listen on a particular port to collect
metrics.
- Vault Agent can read configurations from multiple files.
- Users can specify the log file path using the `-log-file` command flag or
`VAULT_LOG_FILE` environment variable. This is particularly useful when
Vault Agent is running as a Windows service.
- **OpenAPI-based Go & .NET Client Libraries (Public Beta):** Use the new Go &
.NET client libraries to interact with the Vault API from your applications.
- [OpenAPI-based Go client library](https://github.com/hashicorp/vault-client-go/)
- [OpenAPI-based .NET client library](https://github.com/hashicorp/vault-client-dotnet/)
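As a quick sketch of the new `rotate-root` workflow (assuming the Azure auth method is mounted at the default `auth/azure` path):

```shell-session
$ vault write -f auth/azure/rotate-root
```

The rotated credential is not returned in the response, so the new client secret is known only to Vault.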
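A minimal server configuration sketch for the new user lockout settings (values are illustrative; the stanza can also target `ldap`, `approle`, or `all`):

```hcl
user_lockout "userpass" {
  lockout_threshold     = 5
  lockout_duration      = "15m"
  lockout_counter_reset = "15m"
}
```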
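And a minimal Vault Agent sketch for the new `token_file` auto-auth method (the token path is illustrative):

```hcl
auto_auth {
  method "token_file" {
    config = {
      token_file_path = "/home/vault/.vault-token"
    }
  }
}
```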
## Known issues
When Vault is configured without a TLS certificate on the TCP listener, the Vault UI may throw an error that blocks you from performing operational tasks.
The error message: `Q.randomUUID is not a function`
<Note>
Refer to this [Knowledge Base article](https://support.hashicorp.com/hc/en-us/articles/14512496697875) for more details and a workaround.
</Note>
The fix for this UI issue is coming in the Vault 1.13.1 release.
@include 'perf-standby-token-create-forwarding-failure.mdx'
@include 'known-issues/update-primary-data-loss.mdx'
@include 'known-issues/internal-error-namespace-missing-policy.mdx'
@include 'known-issues/ephemeral-loggers-memory-leak.mdx'
@include 'known-issues/sublogger-levels-unchanged-on-reload.mdx'
@include 'known-issues/expiration-metrics-fatal-error.mdx'
@include 'known-issues/perf-secondary-many-mounts-deadlock.mdx'
@include 'known-issues/1_13-reload-census-panic-standby.mdx'
## Feature deprecations and EOL
Please refer to the [Deprecation Plans and Notice](/vault/docs/deprecation) page
for up-to-date information on feature deprecations and plans. A [Feature
Deprecation FAQ](/vault/docs/deprecation/faq) page addresses questions about
decisions made about Vault feature deprecations.
---
layout: docs
page_title: 1.9.0
description: |-
This page contains release notes for Vault 1.9.0.
---
# Vault 1.9.0 release notes
**Software Release Date**: November 19, 2021
**Summary**: This document captures major updates as part of Vault release 1.9.0, including new features, breaking changes, enhancements, deprecation, and EOL plans. Refer to the [Changelog](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md) for additional changes made within the Vault 1.9 release.
## New features
This section describes the new features introduced as part of Vault 1.9.0.
### Client count improvements
Several improvements to client count were made to help customers better track and identify client attribution and reduce over-counting.
#### Improved computation of client counts and usability within the usage metrics UI
The improvements made include the following:
* New logic enables de-duplication of non-entity tokens, thereby reducing their contribution towards the client count
* New logic allows entities to be created for local auth mounts, thereby eliminating non-entity-tokens being issued by the local auth mounts and reducing the overall client count
* Eliminates root tokens from the client count aggregate
* Displays client counts per namespace (top ten, descending order by attribution) in the usage metrics UI with the ability to export data for all namespaces
* Displays client counts before the month ends in the usage metrics UI (within ten minutes of starting the computation)
### Advanced data protection module (ADP) enhancements
The following section provides details about the ADP module features added in this release.
#### Advanced I/O handling for transform FPE (ADP-Transform)
Users of the Format Preserving Encryption (FPE) feature of ADP Transform will now benefit from increased flexibility with regard to formatting the input and output of their data. [Transformation templates](/vault/tutorials/adp/transform#advanced-handling) now include two new fields, **encode_format** and **decode_formats**, that allow users to specify and format individual [capturing groups](https://www.regular-expressions.info/refcapture.html) within the regular expressions that define their formats.
#### MS SQL TDE (ADP-KM)
We added support to Vault Enterprise for customers who want Vault to manage encryption keys for Transparent Data Encryption on MSSQL servers.
#### Key Management Secrets (KMS) engine - GCP (ADP-KM)
The [KMS Engine for GCP](/vault/docs/secrets/gcpkms) provides key management via the Google Cloud KMS to assist with automating many GCP key management functions.
## Other features and enhancements
This section describes other features and enhancements introduced as part of the Vault 1.9 release.
### Vault agent improvements
Improvements were made to the Vault Agent Cache to ensure that [consul-template is always routed through the Vault Agent cache](/vault/docs/agent/template), thereby eliminating the need for listeners to be defined in the Vault Agent for templating alone.
### Customized username generation for database dynamic credentials
This feature enables customization of usernames for database dynamic credentials, helping customers better manage and correlate usernames for actions such as troubleshooting. Vault 1.9 supports Postgres, MSSQL, MySQL, Oracle, and MongoDB.
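As an illustrative sketch for PostgreSQL, the template is set on the connection configuration (the connection details and the template itself are examples only):

```shell-session
$ vault write database/config/postgresql \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@localhost:5432/postgres" \
    allowed_roles="readonly" \
    username="vaultadmin" \
    password="vaultadminpassword" \
    username_template="v_{{.RoleName | truncate 8}}_{{random 8}}_{{unix_time}}"
```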
### Customizable HTTP headers for Vault
This feature allows security operators to configure [custom response headers](/vault/docs/configuration/listener/tcp) for the HTTP root path (`/`) and API endpoints (`/v1/*`), in addition to the previously supported UI paths, through the server HCL configuration file.
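A minimal listener stanza sketch (the header name and values are illustrative):

```hcl
listener "tcp" {
  address = "127.0.0.1:8200"

  custom_response_headers {
    "default" = {
      "Strict-Transport-Security" = ["max-age=31536000", "includeSubDomains"]
    }
  }
}
```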
### Support for IBM s390X CPU architecture
This feature adds support for Vault to run on the IBM s390x architecture via the [equivalent binary](https://releases.hashicorp.com/vault/1.9.0+ent/).
### Namespace API lock
This [feature](/vault/docs/concepts/namespace-api-lock) allows namespace administrators to flexibly control operations such as locking APIs from child namespaces to which they have access. This enables them to restrict access to their domain in a multi-tenant environment and perform break-glass procedures in times of emergency to protect a cluster from within their child namespace.
### Vault terraform provider v3
We have upgraded the [Vault Terraform Provider](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) to the latest version of the [Terraform Plugin SDKv2](/terraform/plugin/sdkv2) to leverage new features.
### Azure secrets engine
The following enhancements are included:
* Added the `use_microsoft_graph_api` configuration parameter for use with the Microsoft Graph API. The Azure Active Directory API is slated for removal by [June 30, 2022](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview).
* The rotate root API is now available to rotate the `client_secret` immediately after configuration.
### Customized metadata for KV
This [enhancement](/vault/api-docs/secret/kv/kv-v2) provides the ability to set version-agnostic custom key metadata for Vault KVv2 secrets via a metadata endpoint. This custom metadata is also visible in the UI.
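For example, assuming the accompanying `-custom-metadata` flag on `vault kv metadata put` (the path and keys are illustrative):

```shell-session
$ vault kv metadata put \
    -custom-metadata=owner=app-team \
    -custom-metadata=environment=production \
    secret/my-app
```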
## UI enhancements
### Expanding the UI for more DB secrets engines
We have been adding support for DB secrets engines in the UI over the past few releases. In the Vault 1.9 release, we have added support for the [Oracle](/vault/docs/secrets/databases/oracle), [Elasticsearch](/vault/docs/secrets/databases/elasticdb), and [PostgreSQL](/vault/docs/secrets/databases/postgresql) database secrets engines in the UI.
### PKI certificate metadata
The [PKI Secrets Engine](/vault/docs/secrets/pki) now displays additional PKI certificate metadata in the UI, such as date issued, date of expiry, serial number, and subject/name.
## Tech preview features
### KV secrets engine v2 patch operations
This feature provides a more streamlined method for managing [KV v2 secrets](/vault/api-docs/secret/kv/kv-v2), enabling customers to better maintain least privilege security in automated environments. This feature allows partial updates to KV v2 secrets without first reading the full secret's key/value pairs.
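A quick sketch of a partial update (the path and key are illustrative); only the keys supplied on the command line change, and the rest of the secret is left untouched:

```shell-session
$ vault kv patch secret/creds ttl=48h
```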
### Vault as an OIDC provider
Vault can now act as an OIDC Provider so applications can leverage the pre-existing [Vault identities](/vault/api-docs/secret/identity) to authenticate into applications.
## Breaking changes
The following section details breaking changes introduced in Vault 1.9.
### Removal of HTTP request counters
In Vault 1.9, the [internal HTTP Request count API](/vault/api-docs/system/internal-counters#http-requests) was removed from the product. Calls to the endpoint will result in a **404 error** with a message stating that functionality on this path has been removed.
Please refer to the [upgrade guide](/vault/docs/upgrading/upgrade-to-1.9.x) for more information.
As called out in the documentation, Vault does not make backwards compatible guarantees on internal APIs (those prefaced with `sys/internal`). They are subject to change and may disappear without notice.
## Feature deprecations and EOL
Please refer to the [Deprecation Plans and Notice](/vault/docs/deprecation) page for up-to-date information on feature deprecations and plans. An [FAQ](/vault/docs/deprecation/faq) page is also available to address questions concerning decisions made about Vault feature deprecations.
---
layout: docs
page_title: 1.11.0
description: |-
This page contains release notes for Vault 1.11.0
---
# Vault 1.11.0 release notes
**Software Release date:** June 21, 2022
**Summary:** Vault Release 1.11.0 offers features and enhancements that improve the user experience while closing the loop on key issues previously encountered by our customers. We are providing a summary of these improvements in these release notes.
We encourage you to upgrade to the latest release to take advantage of the new benefits that we are providing. With this latest release, we offer solutions to critical feature gaps that have been identified previously. For further information on product improvements, including a comprehensive list of bug fixes, please refer to the [Changelog](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md) within the Vault release.
Some of these enhancements and changes in this release include:
- Vault Consul secrets engine provides a templating policy to allow node and service identities to be set during Consul token creation
- Snowflake secrets engine added key pair-based authentication
- Vault adds a Kubernetes secrets engine to allow creating dynamic k8s service accounts
- ADP-Transform extends its functionality by adding a convergent tokenization mode and a tokenization lookup
- ADP-KM adds four new operations
- Client count tooling improvements to help understand the attribution of clients better
- Integrated Storage autopilot improvements include automated upgrades and redundancy zones
- Plugin Multiplexing support is extended to secret and auth plugins, allowing them to be managed more efficiently with a single process
## New features
This section describes the new features introduced as part of Vault 1.11.0.
### Configure GCP auth to target non-public Google API addresses
Previously, the GCP auth method only allowed public API endpoints to be configured for authentication purposes. Workloads running in GCP that do not have external internet access need the ability to authenticate using [Private Google Access](https://cloud.google.com/vpc/docs/private-google-access#pga). In Vault 1.11.0, we allow for customization of certain service endpoints. For more information, refer to the [GCP auth method](/vault/api-docs/auth/gcp#custom_endpoint) documentation.
### Support for key pair-based authentication for snowflake
In Vault 1.11.0, the Snowflake Database Engine supports an additional credential type that can be generated. For users not wanting to rely on the standard user/pass authentication to Snowflake, Vault can now dynamically generate RSA key pairs that allow users to authenticate into Snowflake. For more information, refer to the [Snowflake Database Secrets Engine](/vault/docs/secrets/databases/snowflake) and [Database Secrets Engine (API)](/vault/api-docs/secret/databases) documentation.
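A rough sketch of what a key pair role might look like, assuming the database secrets engine's `credential_type` parameter; the creation statement and its placeholders are illustrative, so consult the Snowflake plugin documentation for the exact template:

```shell-session
$ vault write database/roles/snowflake-keypair \
    db_name=snowflake \
    credential_type=rsa_private_key \
    creation_statements="CREATE USER {{name}} RSA_PUBLIC_KEY='{{public_key}}' DEFAULT_ROLE=public;" \
    default_ttl=1h
```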
### Dynamic kubernetes service account secrets
Kubernetes service accounts must be manually generated and passed to a Kubernetes configuration file or the command line using a CLI tool such as kubectl to interact with Kubernetes clusters. With this method, service account credentials, which contain static secrets, can be exposed and would require periodic manual rotation. To address this issue, we now support generating short-lived dynamic service accounts and associating role bindings with specific Kubernetes namespaces. For more information, refer to the [Kubernetes Auth Method](/vault/docs/auth/kubernetes) and [Kubernetes Auth Method (API)](/vault/api-docs/auth/kubernetes) documentation.
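A sketch of the workflow (role and namespace names are illustrative; when Vault runs inside the target cluster, the engine configuration can often be left at its defaults):

```shell-session
$ vault secrets enable kubernetes
$ vault write -f kubernetes/config
$ vault write kubernetes/roles/app \
    allowed_kubernetes_namespaces="default" \
    token_default_ttl="10m"
$ vault write kubernetes/creds/app kubernetes_namespace=default
```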
### New KV secrets engine (v2) utilities
The KV version 2 secrets engine now includes a set of utilities and enhancements for easier retrieval of key-value secrets and metadata. This includes:
* New optional Vault CLI mount flag (`vault kv get -mount=secret foo`).
* New flag to output a sample policy in HCL (`-output-policy`) for any Vault CLI command.
* New KV convenience/helper methods (GET and PUT) added to the Go client library.
For more details, refer to the [Version Key Value Secrets Engine](/vault/tutorials/secrets-management/versioned-kv) tutorial.
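For example, the new flags compose as follows (the mount and secret names are illustrative):

```shell-session
$ vault kv get -mount=secret foo
$ vault kv get -mount=secret -output-policy foo
```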
### Support for node identity and service identity for Vault consul secrets engine
Within the Consul secrets engine, practitioners writing a Vault role can specify node-identity or service-identity. You can also specify multiples of each identity on a Vault role. For more information, refer to the [Consul Secrets Engine](/vault/docs/secrets/consul) and [Consul Secrets Engine (API)](/vault/api-docs/secret/consul) documentation.
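A sketch of a role carrying both identity types (names and datacenters are illustrative):

```shell-session
$ vault write consul/roles/my-role \
    node_identities="client-1:dc1" \
    service_identities="api-svc:dc1"
```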
### Autopilot (Vault enterprise)
Vault release 1.7 introduced the Autopilot feature to Integrated Storage. In this release, new Autopilot features are added to Vault Enterprise to perform seamless automatic upgrades and support redundancy zones for improved cluster resiliency. Refer to the [autopilot endpoint](/vault/api-docs/system/storage/raftautopilot#sys-storage-raft-autopilot), [operator raft](/vault/docs/commands/operator/raft), [Autopilot](/vault/docs/concepts/integrated-storage/autopilot), [Automated Upgrades](/vault/docs/enterprise/automated-upgrades), and [Redundancy Zones](/vault/docs/enterprise/redundancy-zones) documentation for more information.
## Other features and enhancements
This section describes other features and enhancements introduced as part of the Vault 1.11.0 release.
### Import externally-generated keys into transit secrets engine
Historically, Vault has only allowed the Transit secrets engine to utilize keys that were created by Vault itself. In this release, we have introduced an import feature for the Transit secrets engine that enables individuals to bring externally-generated encryption keys into a Transit keyring. These keys can then be used identically to internally-generated Transit keys.
### Improved CA rotation
PKI secrets engine users have sought a way to rotate root or intermediate CAs without causing service interruptions to any entities referencing them. Vault can now create the newly rotated PKI key pairs for servicing new certificates at the same path as the pre-existing keypair. This allows operators to gradually transition entities over to the new root certificate while the old is still active.
### Client count tooling improvements
We have made the following improvements to the Client Count tooling:
* Provide the ability to export the unique clients that contribute to the client count aggregate for the selected billing period via a new [activity export API endpoint](/vault/api-docs/system/internal-counters#activity-export). This feature is available in tech preview mode.
* Provide the ability to view changes to client counts month over month in the UI.
### MFA enhancements
Vault 1.10 introduced [Login MFA](/vault/docs/auth/login-mfa) support for Vault Community Edition. In this release, we included additional enhancements to the Login MFA feature by introducing the ability to configure Login MFA via the UI and providing an enhanced TOTP configuration experience via the QR code scan.
### Vault agent: support for using an existing valid certificate upon re-authentication
Enhancements have been made to the Vault Agent to support the parsing of a certificate that's been fetched. A new certificate will only be fetched upon a re-authentication if the certificate's lifetime has expired. This enhancement drastically reduces the resource overhead that Vault Agent users often experience due to over-fetching certificates.
### Namespace enhancements for Vault terraform
With Terraform Vault provider v3.7.0, we have made enhancements where it’s now possible to specify the namespace directly within the resource or data source. All resource or data source-specific namespaces are relative to their provider’s configured namespace. This enhancement encourages a better workflow for namespaces, reduces execution time when handling failures of a Terraform plan, and eases the burden on system resources such as memory, CPU, etc.
### ADP-Transform enhancements
Two new enhancements were made to the Transform secrets engine. The first is Convergent Tokenization, which allows tokenization transformations to be configured as _convergent_. When enabled, this guarantees that tokenizing a given plaintext and expiration more than once always results in the same token value being produced. Please refer to the [Convergent Tokenization](/vault/docs/secrets/transform/tokenization#convergence) document for more information. The second is Token Lookup, which allows you to look up the value of a token given its plaintext. While this is not typically encouraged from a security perspective, it may be necessary for particular circumstances that require this operation. Note that token lookup is only supported when convergence is enabled. For more information on the endpoint, refer to the [Lookup Token](/vault/api-docs/secret/transform#lookup-token) documentation.
### KMIP support for import, query, encryption and decryption
Previously, KMIP did not support certain operations such as import, decrypt, encrypt, and query. These operations are now supported. For a complete list of supported KMIP operations, please refer to the [Supported KMIP Operations](/vault/docs/secrets/kmip) documentation.
@include 'pgx-params.mdx'
## Known issues
When you use Vault 1.11.0+ as Consul's Connect CA, you may encounter an issue generating the leaf certificates ([GH-15525](https://github.com/hashicorp/consul/pull/15525)). Upgrade to a [Consul version that includes the fix](https://support.hashicorp.com/hc/en-us/articles/11308460105491#01GMC24E6PPGXMRX8DMT4HZYTW) to avoid running into this problem.
-> Refer to this [Knowledge Base article](https://support.hashicorp.com/hc/en-us/articles/11308460105491) for more details.
## Feature deprecations and EOL
Please refer to the [Deprecation Plans and Notice](/vault/docs/deprecation) page for up-to-date information on feature deprecations and plans. A [Feature Deprecation FAQ](/vault/docs/deprecation/faq) page is also available to address questions concerning decisions made about Vault feature deprecations.
---
layout: docs
page_title: "1.15.0 release notes"
description: |-
Key updates for Vault 1.15.0
---
# Vault 1.15.0 release notes
**GA date:** 2023-09-27
@include 'release-notes/intro.mdx'
## Known issues and breaking changes
| Version | Issue |
|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.15.0+ | [Vault no longer reports rollback metrics by mountpoint](/vault/docs/upgrading/upgrade-to-1.15.x#rollback-metrics) |
| 1.15.0 | [Panic in AWS auth method during IAM-based login](/vault/docs/upgrading/upgrade-to-1.15.x#panic-in-aws-auth-method-during-iam-based-login) |
| 1.15.0+ | [UI Collapsed navbar does not allow certain click events](/vault/docs/upgrading/upgrade-to-1.15.x#ui-collapsed-navbar) |
| 1.15 | [Vault file audit devices do not honor SIGHUP signal to reload](/vault/docs/upgrading/upgrade-to-1.15.x#file-audit-devices-do-not-honor-sighup-signal-to-reload) |
| 1.15.0 - 1.15.1 | [Vault storing references to ephemeral sub-loggers leading to unbounded memory consumption](/vault/docs/upgrading/upgrade-to-1.15.x#vault-is-storing-references-to-ephemeral-sub-loggers-leading-to-unbounded-memory-consumption) |
| 1.15.0 - 1.15.1 | [Internal error when vault policy in namespace does not exist](/vault/docs/upgrading/upgrade-to-1.15.x#internal-error-when-vault-policy-in-namespace-does-not-exist) |
| 1.15.0+ | [Sublogger levels not adjusted on reload](/vault/docs/upgrading/upgrade-to-1.15.x#sublogger-levels-unchanged-on-reload) |
| 1.15.0+ | [URL change for KV v2 plugin](/vault/docs/upgrading/upgrade-to-1.15.x#kv2-url-change) |
| 1.15.1 | [Fatal error during expiration metrics gathering causing Vault crash](/vault/docs/upgrading/upgrade-to-1.15.x#fatal-error-during-expiration-metrics-gathering-causing-vault-crash) |
| 1.15.0 - 1.15.4 | [Audit devices could log raw data despite configuration](/vault/docs/upgrading/upgrade-to-1.15.x#audit-devices-could-log-raw-data-despite-configuration) |
| 1.15.5 | [Unable to rotate LDAP credentials](/vault/docs/upgrading/upgrade-to-1.15.x#unable-to-rotate-ldap-credentials) |
| 1.15.0 - 1.15.5 | [Deadlock can occur on performance secondary clusters with many mounts](/vault/docs/upgrading/upgrade-to-1.15.x#deadlock-can-occur-on-performance-secondary-clusters-with-many-mounts) |
| 1.15.0 - 1.15.5 | [Audit fails to recover from panics when formatting audit entries](/vault/docs/upgrading/upgrade-to-1.15.x#audit-fails-to-recover-from-panics-when-formatting-audit-entries) |
| 1.15.0 - 1.15.7 | [Vault Enterprise performance standby nodes audit all request headers regardless of settings](/vault/docs/upgrading/upgrade-to-1.15.x#vault-enterprise-performance-standby-nodes-audit-all-request-headers) |
| 1.15.3 - 1.15.9 | [New nodes added by autopilot upgrades provisioned with the wrong version](/vault/docs/upgrading/upgrade-to-1.15.x#new-nodes-added-by-autopilot-upgrades-provisioned-with-the-wrong-version) |
| 1.15.8 - 1.15.9 | [Autopilot upgrade for Vault Enterprise fails](/vault/docs/upgrading/upgrade-to-1.15.x#autopilot) |
| 1.15.0 - 1.15.11 | [Listener stops listening on untrusted upstream connection with particular config settings](/vault/docs/upgrading/upgrade-to-1.15.x#listener-proxy-protocol-config) |
| 0.7.0+ | [Duplicate identity groups created](/vault/docs/upgrading/upgrade-to-1.15.x#duplicate-identity-groups-created-when-concurrent-requests-sent-to-the-primary-and-pr-secondary-cluster) |
| 0.7.0+ | [Manual entity merges fail](/vault/docs/upgrading/upgrade-to-1.15.x#manual-entity-merges-sent-to-a-pr-secondary-cluster-are-not-persisted-to-storage) |
## Vault companion updates
Companion updates are Vault updates that live outside the main Vault binary.
<table>
<thead>
<tr>
<th>Release</th>
<th>Update</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Vault Secrets Operator
</td>
<td>GA</td>
<td>
Run the Vault Secrets Operator (v0.3.0) on Red Hat OpenShift.
<br /><br />
Learn more: <a href="/vault/docs/platform/k8s/vso/openshift">Vault Secrets Operator</a>
</td>
</tr>
</tbody>
</table>
## Core updates
Follow the learn more links for more information, or browse the list of
[Vault tutorials updated to highlight changes for the most recent GA release](/vault/tutorials/new-release).
<table>
<thead>
<tr>
<th>Release</th>
<th>Update</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td rowSpan={2}>
Vault Agent
</td>
<td>ENHANCED</td>
<td>
Updated to use the latest Azure SDK version and Workload Identity
Federation (WIF).
<br /><br />
Learn more:
<a href="/vault/docs/agent-and-proxy/agent">What is Vault Agent?</a>
</td>
</tr>
<tr>
<td>GA</td>
<td>
Fetch secrets directly into your application as environment variables.
<br /><br />
Learn more: <a href="/vault/docs/agent-and-proxy/agent/process-supervisor">Process Supervisor Mode</a>
</td>
</tr>
<tr>
<td>
External plugins
</td>
<td>BETA</td>
<td>
Run external plugins in their own container with native container platform
controls.
<br /><br />
Learn more: <a href="/vault/docs/plugins/containerized-plugins">Containerize Vault plugins</a>
</td>
</tr>
<tr>
<td>
Eventing
</td>
<td>BETA</td>
<td>
Subscribe to notifications for various events in Vault. Includes support
for filtering, permissions, and cluster configurations with K-V secrets.
<br /><br />
Learn more: <a href="/vault/docs/concepts/events">Events</a>
</td>
</tr>
<tr>
<td rowSpan={2}>
Vault GUI
</td>
<td>GA</td>
<td>
New LDAP secrets engine GUI.
<br /><br />
Learn more: <a href="/vault/docs/configuration/ui">Vault UI guide</a>
</td>
</tr>
<tr>
<td>ENHANCED</td>
<td>
• New landing page dashboard.<br />
• View secrets you have read access to under your directory.<br />
• View diffs between previous and new secret versions.<br />
• Copy and paste secret paths from the GUI to the Vault CLI or API.
<br /><br />
Learn more: <a href="/vault/docs/configuration/ui">Vault UI guide</a>
</td>
</tr>
<tr>
<td rowSpan={2}>
Secrets management
</td>
<td>GA</td>
<td>
Connect to Google Cloud Platform (GCP) Cloud SQL instances using native
IAM credentials.
<br /><br />
Learn more:
<a href="/vault/docs/sync/gcpsm">Google Cloud Platform Secret Manager</a>
</td>
</tr>
<tr>
<td>ENHANCED</td>
<td>
Improved TTL management for database credentials with configurable
credential rotation.
<br /><br />
Learn more: <a href="/vault/api-docs/secret">Secrets engines</a>
</td>
</tr>
</tbody>
</table>
## Enterprise updates
<table>
<thead>
<tr>
<th>Release</th>
<th>Update</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Secrets syncing
</td>
<td>BETA</td>
<td>
Sync Key Value (KV) v2 data between Vault and secrets managers from AWS,
Azure, Google Cloud Platform (GCP), GitHub, and Vercel.
<br /><br />
Learn more: <a href="/vault/docs/sync">Secrets Sync</a>
</td>
</tr>
<tr>
<td>
Public Key Infrastructure (PKI)
</td>
<td>GA</td>
<td>
Control Vault PKI issued certificates with the Certificate Issuance
External Policy Service (CIEPS) to ensure consistency and compliance with
enterprise standards.
<br /><br />
Learn more: <a href="/vault/docs/secrets/pki/cieps">Certificate Issuance External Policy Service (CIEPS)</a>
</td>
</tr>
<tr>
<td>
Replication
</td>
<td>ENHANCED</td>
<td>
Holistic improvements to cluster replication including problem detection
and remediation.
<br /><br />
Learn more: <a href="/vault/docs/enterprise/replication">Vault Enterprise replication</a>
</td>
</tr>
<tr>
<td>
Seal High Availability
</td>
<td>BETA</td>
<td>
Enables Vault administrators to configure multiple KMS for seal keys to
ensure Vault availability in the event a single KMS becomes unavailable.
<br /><br />
Learn more: <a href="/vault/docs/configuration/seal/seal-ha">Seal wrap</a>
</td>
</tr>
<tr>
<td>
Authentication
</td>
<td>GA</td>
<td>
Authenticate to Vault with your SAML identity provider.
<br /><br />
Learn more: <a href="/vault/docs/auth/saml">SAML auth method</a>
</td>
</tr>
</tbody>
</table>
## Feature deprecations and EOL
Deprecated in 1.15 | Retired in 1.15
------------------ | ---------------
None | None
@include 'release-notes/deprecation-note.mdx'
Seal wrap a td tr tr td style Authentication td td style GA td td style Authenticate to Vault with your SAML identity provider br br Learn more a href vault docs auth saml SAML auth method a td tr tbody table Feature deprecations and EOL Deprecated in 1 15 Retired in 1 15 None None include release notes deprecation note mdx |
---
layout: docs
page_title: "1.17.0 release notes"
description: |-
Key updates for Vault 1.17.0
---
# Vault 1.17.0 release notes
**GA date:** 2024-06-12
@include 'release-notes/intro.mdx'
## Important changes
| Change | Description |
|------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| New default (1.17) | [Allowed audit headers now have unremovable defaults](/vault/docs/upgrading/upgrade-to-1.17.x#audit-headers) |
| Opt out feature (1.17) | [PKI sign-intermediate now truncates `notAfter` field to signing issuer](/vault/docs/upgrading/upgrade-to-1.17.x#pki-truncate) |
| Beta feature deprecated (1.17) | [Request limiter deprecated](/vault/docs/upgrading/upgrade-to-1.17.x#request-limiter) |
| Known issue (1.17.0+) | [PKI OCSP GET requests can return HTTP redirect responses](/vault/docs/upgrading/upgrade-to-1.17.x#pki-ocsp) |
| Known issue (1.17.0) | [Vault Agent and Vault Proxy consume excessive amounts of CPU](/vault/docs/upgrading/upgrade-to-1.17.x#agent-proxy-cpu-1-17) |
| Known issue (1.15.8 - 1.15.9, 1.16.0 - 1.16.3) | [Autopilot upgrade for Vault Enterprise fails](/vault/docs/upgrading/upgrade-to-1.16.x#new-nodes-added-by-autopilot-upgrades-provisioned-with-the-wrong-version) |
| Known issue (1.17.0 - 1.17.2) | [Vault standby nodes not deleting removed entity-aliases from in-memory database](/vault/docs/upgrading/upgrade-to-1.17.x#dangling-entity-alias-in-memory) |
| Known issue (1.17.0 - 1.17.3) | [AWS Auth AssumeRole requires an external ID even if none is set](/vault/docs/upgrading/upgrade-to-1.17.x#aws-auth-role-configuration-requires-an-external_id) |
| Known issue (0.7.0+)                           | [Duplicate identity groups created](/vault/docs/upgrading/upgrade-to-1.17.x#duplicate-identity-groups-created-when-concurrent-requests-sent-to-the-primary-and-pr-secondary-cluster) |
| Known issue (0.7.0+)                           | [Manual entity merges fail](/vault/docs/upgrading/upgrade-to-1.17.x#manual-entity-merges-sent-to-a-pr-secondary-cluster-are-not-persisted-to-storage) |
| Known issue (1.17.3 - 1.17.4)                  | [Some values in the audit logs not HMAC'd properly](/vault/docs/upgrading/upgrade-to-1.17.x#client-tokens-and-token-accessors-audited-in-plaintext) |
| Known issue (1.17.0 - 1.17.5)                  | [Cached activation flags for secrets sync on follower nodes are not updated](/vault/docs/upgrading/upgrade-to-1.17.x#cached-activation-flags-for-secrets-sync-on-follower-nodes-are-not-updated) |
| New default (1.17.9) | [Vault product usage metrics reporting](/vault/docs/upgrading/upgrade-to-1.17.x#product-usage-reporting) |
| Deprecation (1.17.9) | [`default_report_months` is deprecated for the `sys/internal/counters` API](/vault/docs/upgrading/upgrade-to-1.17.x#activity-log-changes) |
## Vault companion updates
Companion updates are Vault updates that live outside the main Vault binary.
**None**.
## Core updates
Follow the learn more links for more information, or browse the list of
[Vault tutorials updated to highlight changes for the most recent GA release](/vault/tutorials/new-release).
<table>
<thead>
<tr>
<th>Release</th>
<th>Update</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Security patches
</td>
<td>ENHANCED</td>
<td>
Various security improvements to remediate varying severity and
informational findings from a 3rd party security audit.
</td>
</tr>
<tr>
<td>
Vault Agent and Vault Proxy self-healing tokens
</td>
<td>ENHANCED</td>
<td>
Auto-authentication avoids agent/proxy restarts and config changes by
automatically re-authenticating authN tokens to Vault.
<br /><br />
Learn more: <a href="/vault/docs/agent-and-proxy/autoauth">Vault Agent and Vault Proxy auto-auth</a>
</td>
</tr>
</tbody>
</table>
## Enterprise updates
<table>
<thead>
<tr>
<th>Release</th>
<th>Update</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Adaptive overload protection
</td>
<td>BETA</td>
<td>
Prevent client requests from overwhelming a variety of server resources
that could lead to poor server availability.
<br /><br />
Learn more: <a href="/vault/docs/concepts/adaptive-overload-protection">Adaptive overload protection overview</a>
</td>
</tr>
<tr>
<td>
ACME client count
</td>
<td>ENHANCED</td>
<td>
To improve clarity around client counts, Vault now separates ACME clients
from non-entity clients.
</td>
</tr>
<tr>
<td rowSpan={2}>
Public Key Infrastructure (PKI)
</td>
<td>GA</td>
<td>
Automate certificate lifecycle management for IoT/EST enabled devices with
native EST protocol support.
<br /><br />
Learn more: <a href="/vault/docs/secrets/pki/est">Enrollment over Secure Transport (EST)</a> overview
</td>
</tr>
<tr>
<td>GA</td>
<td>
Submit custom metadata with certificate requests and store the additional
information in Vault for further analysis.
<br /><br />
Learn more: <a href="/vault/api-docs/secret/pki#metadata">PKI secrets engine API</a>
</td>
</tr>
<tr>
<td rowSpan={3}>
Resource management
</td>
<td>ENHANCED</td>
<td>
Vault now supports a greater number of namespaces and mounts for
large-scale Vault installations.
</td>
</tr>
<tr>
<td>GA</td>
<td>
Use hierarchical mount paths to organize, manage, and control access to
secret engine objects.
</td>
</tr>
<tr>
<td>GA</td>
<td>
Safely override the max entry size to set different limits for specific
storage entries that contain mount tables, auth tables, and namespace
configuration data.
<br /><br />
Learn more: <a href="/vault/docs/configuration/storage/raft#max_mount_and_namespace_table_entry_size"><code>max_mount_and_namespace_table_entry_size</code> parameter</a>
</td>
</tr>
<tr>
<td>
Transit
</td>
<td>GA</td>
<td>
Use cipher-based message authentication code (CMAC) with AES symmetric
keys in the Vault Transit plugin.
<br /><br />
Learn more: <a href="/vault/docs/secrets/transit#aes256-cmac">CMAC support</a>
</td>
</tr>
<tr>
<td>
Plugin identity tokens
</td>
<td>GA</td>
<td>
Enable AWS, Azure, and GCP authentication flows with workload identity
federation (WIF) tokens from the associated secrets plugins without
explicitly configuring sensitive security credentials.
<br /><br />
Learn more: <a href="/vault/docs/secrets/aws#plugin-workload-identity-federation-wif">Plugin WIF overview</a>
</td>
</tr>
<tr>
<td>
LDAP secrets engine
</td>
<td>GA</td>
<td>
Use hierarchical paths with roles and set names to define policies that
map 1-to-1 to LDAP secrets engine roles.
<br /><br />
Learn more: <a href="/vault/docs/secrets/ldap#hierarchical-paths">Hierarchical paths</a> overview
</td>
</tr>
<tr>
<td>
Clock skew and lag detection
</td>
<td>GA</td>
<td>
Use the <code>sys/health</code> and <code>sys/ha-status</code> endpoints
to display lags in performance secondaries and performance standby nodes.
<br /><br />
Learn more: <a href="/vault/docs/enterprise/consistency#clock-skew-and-replication-lag">Clock skew and replication lag</a> overview
</td>
</tr>
</tbody>
</table>
## Feature deprecations and EOL
Deprecated in 1.17 | Retired in 1.17
------------------ | ---------------
None | Centrify Auth plugin
@include 'release-notes/deprecation-note.mdx'
---
layout: docs
page_title: "1.18.0 release notes"
description: |-
Key updates for Vault 1.18.0
---
# Vault 1.18.0 release notes
**GA date:** 2024-10-09
@include 'release-notes/intro.mdx'
## Important changes
| Change | Description |
|-----------------------------|----------------------------------------------------------------------------------------------------------------------|
| New default (1.18.0) | [Default activity log querying period](/vault/docs/upgrading/upgrade-to-1.18.x#default-activity-log-querying-period) |
| New default (1.18.0) | [Docker image no longer contains curl](/vault/docs/upgrading/upgrade-to-1.18.x#docker-image-no-longer-contains-curl) |
| Beta feature removed (1.18) | [Request limiter removed](/vault/docs/upgrading/upgrade-to-1.18.x#request-limiter-configuration-removal) |
| New default (1.18.2) | [Vault product usage metrics reporting](/vault/docs/upgrading/upgrade-to-1.18.x#product-usage-reporting) |
## Vault companion updates
Companion updates are Vault updates that live outside the main Vault binary.
**None**.
## Community updates
Follow the learn more links for more information, or browse the list of
[Vault tutorials updated to highlight changes for the most recent GA release](/vault/tutorials/new-release).
**None**.
## Enterprise updates
<table>
<thead>
<tr>
<th>Release</th>
<th>Update</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Adaptive overload protection
</td>
<td>GA</td>
<td>
Prevent client requests from overwhelming a variety of server resources
that could lead to poor server availability.
<br /><br />
Learn more: <a href="/vault/docs/concepts/adaptive-overload-protection">Adaptive overload protection overview</a>
</td>
</tr>
<tr>
<td>
Autopilot
</td>
<td>ENHANCED</td>
<td>
Overall stability improvements.
<br /><br />
Learn more: <a href="/vault/docs/concepts/integrated-storage/autopilot">Autopilot overview</a>
</td>
</tr>
<tr>
<td>
Client count
</td>
<td>ENHANCED</td>
<td>
Improved clarity around metering and billing attribution.
<br /><br />
Learn more: <a href="/vault/docs/concepts/client-count/counting">Client count calculations</a>
</td>
</tr>
<tr>
<td>
PKI CMPv2
</td>
<td>GA</td>
<td>
Enable PKI support for automated certificate enrollment with the CMPv2
protocol for 5G networks per 3GPP standards.
<br /><br />
Learn more: <a href="/vault/docs/secrets/pki/cmpv2">CMPv2 in the Vault PKI plugin</a>
</td>
</tr>
<tr>
<td>
Vault UI
</td>
<td>GA</td>
<td>
Use the Vault UI to configure AWS WIF plugins.
<br /><br />
Learn more: <a href="/vault/docs/secrets/aws#plugin-workload-identity-federation-wif">AWS WIF</a>
</td>
</tr>
<tr>
<td>
PostgreSQL plugin
</td>
<td>GA</td>
<td>
Use rootless rotation for PostgreSQL static roles so individual database
accounts can rotate their own passwords.
<br /><br />
Learn more: <a href="/vault/docs/secrets/databases/postgresql">PostgreSQL plugin overview</a>
</td>
</tr>
<tr>
<td>
KV Patch and Subkey support in Vault’s GUI
</td>
<td>GA</td>
<td>
Configure GUI access to key names in the KV plugin for users without
granting read access to the values.
</td>
</tr>
<tr>
<td>
Vault Enterprise with HSM for ARM architecture
</td>
<td>GA</td>
<td>
Run Vault Enterprise on ARM machines with Hardware Security Modules.
<br /><br />
Vault releases: <a href="https://releases.hashicorp.com/vault/">releases.hashicorp.com/vault</a>
</td>
</tr>
</tbody>
</table>
## Feature deprecations and EOL
Deprecated in 1.18.x | Retired in 1.18.x
-------------------- | ---------------
None | None
@include 'release-notes/deprecation-note.mdx'
---
layout: docs
page_title: Key Rotation
description: Learn about the details of key rotation within Vault.
---
# Key rotation
Vault stores different encryption keys for different purposes. Vault uses key
rotation to periodically change the keys according to a configured limit or in
response to a potential leak or compromised service.
## Relevant key definitions
There are four keys involved in key rotation:
- **internal encryption key** - Encrypts and protects data written to the
storage backend.
- **root key** - "Master" key that seals Vault and protects the internal
encryption key.
- **unseal key** - A portion (share) of the root key used to reconstruct the
root key. By default, Vault uses the
[Shamir's secret sharing algorithm](https://en.wikipedia.org/wiki/Shamir's_Secret_Sharing)
to split the root key into 5 shares.
- **upgrade key** - A short-lived copy of the internal encryption key created
during key rotation in high-availability deployments. Vault encrypts upgrade
keys using the previous internal encryption key.
## How key rotation works
Vault supports online **rekey** and **rotate** operations to update the root
key, unseal keys, and backend encryption key even for high-availability
deployments. In replicated deployments, the active node performs the operations
and standby nodes use an upgrade key to update their keys without requiring a
manual unseal operation.
1. Rekeying begins with a configured split and threshold for unseal keys:
1. Vault receives the configured threshold of unseal keys.
1. Vault generates and splits the new root key.
1. Vault re-encrypts the internal encryption key with the new root key.
1. Vault returns the new unseal keys.
1. Rotation begins:
1. Vault generates a new internal encryption key.
1. Vault adds the new encryption key to an internal keyring.
1. Vault creates a temporary **upgrade key** (if needed).
![Key Rotate](/img/vault-key-rotate.png)
Once the rotation completes, Vault can encrypt new writes to the storage backend
using the new key, but still decrypt entries written under the previous key.
<Tip title="Related API endpoints">
ConfigureKeyRotation - [`POST:/sys/rotate/config`](/vault/api-docs/system/rotate-config)
</Tip>
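For reference, the rekey and rotate operations described above map to the
following CLI workflow. The commands below are a minimal sketch: the share
counts and rotation limits are illustrative values, not recommendations.

```shell-session
$ # Start a rekey that splits the new root key into 5 shares (threshold of 3)
$ vault operator rekey -init -key-shares=5 -key-threshold=3

$ # Rotate the backend encryption key immediately
$ vault operator rotate

$ # Tune the automatic rotation limits (illustrative values)
$ vault write sys/rotate/config max_operations=1000000000 interval=8760h
```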
## NIST rotation guidance
The National Institute of Standards and Technology (NIST) recommends
periodically rotating encryption keys, even without a leak or compromise event.
Due to the nature of AES-256-GCM encryption,
[NIST publication 800-38D](https://csrc.nist.gov/pubs/sp/800/38/d/final)
recommends rotating keys **before** performing ~2<sup>32</sup> encryptions. By
default, Vault monitors the `vault.barrier.estimated_encryptions` metric and
automatically rotates the backend encryption key before reaching 2<sup>32</sup>
encryption operations.
You can approximate the `vault.barrier.estimated_encryptions` metric with the
following sum:
<CodeBlockConfig hideClipboard>
```text
ESTIMATED_OPS = PUT_EVENTS + CREATE_EVENTS + MERKLE_FLUSH_EVENTS + WAL_INDEX
```
</CodeBlockConfig>
where:
- **`PUT_EVENTS`** is the `vault.barrier.put` telemetry metric.
- **`CREATE_EVENTS`** is the `vault.token.creation` metric where `token_type`
  is `batch`.
- **`MERKLE_FLUSH_EVENTS`** is the `merkle.flushDirty.num_pages` telemetry metric.
- **`WAL_INDEX`** is the current write-ahead-log index.
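If you want to inspect the underlying barrier metrics directly, Vault exposes
them through the telemetry endpoint. The sketch below assumes telemetry is
enabled and your token is permitted to read `sys/metrics`:

```shell-session
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/metrics?format=prometheus" | grep barrier
```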
<Tip>
Vault periodically persists the number of encryptions to support rotation. The
save operation has a 1 second timeout to limit performance impact when Vault is
under heavy load. If you use seal wrap, persisting encryptions involves the seal
backend, which means that some seals, like HSMs, may routinely take longer than
1 second to respond. You can override the save timeout by setting the
`VAULT_ENCRYPTION_COUNT_PERSIST_TIMEOUT` environment variable on your Vault
server to a larger value, such as "5s".
</Tip>
---
layout: docs
page_title: Raft integrated storage
description: Learn about integrated Raft storage in Vault.
---
# Integrated Raft storage
Vault supports several options for durable information storage. Each backend
offers pros, cons, advantages, and trade-offs. For example, some backends
support high availability while others provide a more robust backup and
restoration process. Integrated storage is a "built-in" storage option that
supports backup/restore workflows, high availability, and Enterprise replication
features without relying on third-party systems.
## Raft protocol overview
<Highlight>
[The Secret Lives of Data] has a nice visual explanation of Raft storage.
</Highlight>
Raft storage uses a [consensus protocol] based on [Paxos] and the work in
["Raft: In search of an Understandable Consensus Algorithm"] to provide CAP
[consistency].
Raft performance is bound by disk I/O and network latency, and
comparable to Paxos. With stable leadership, committing a log entry requires a
single round trip to half of the peer set.
Compared to Paxos, Raft is designed to have fewer states and a simpler, more
understandable algorithm that depends on the following elements:
- **Log** - An ordered sequence of entries (replicated log) that tracks cluster
changes. For example, writing data is a new event, which creates a
corresponding log entry.
- **Peer set** - The set of all members participating in log replication. All
server nodes are in the peer set of the local cluster.
- **Leader** - At any given time, the peer set elects a single node to be the
leader. Leaders ingest new log entries, replicate the log to followers, and
manage when an entry should be committed. Leaders manage log replication and
inconsistencies within replicated log entries may indicate an issue with the
leader.
- **Quorum** - A majority of members from a peer set. For a peer set of size `N`,
quorum requires at least `ceil( (N + 1) / 2 )` members. For example, quorum in
a peer set of 5 members requires 3 nodes. If a cluster cannot achieve quorum,
**the cluster becomes unavailable** and cannot commit new logs.
- **Committed entry** - A log entry that is replicated to a quorum of nodes. Log
entries are only applied once they are committed.
- **Deterministic finite-state machine ([DFSM])** - A collection of known states
with predictable transitions between the states. In Raft, the DFSM transitions
between states whenever new logs are applied. By DFSM rules, multiple
applications of the same sequence of logs must always result in the same final
state.
### Node states
Raft nodes are always in one of following states:
- **follower** - All nodes start as a follower. Followers accept log entries
from a leader and cast votes for leader selection.
- **candidate** - A node self-promotes to the candidate state whenever it goes
without receiving log entries for a given period of time. During
self-promotion, candidates request votes from the rest of their peer set.
- **leader** - Nodes become leaders once they receive a quorum of votes as a
candidate.
### Writing logs
With Raft, a log entry is an opaque binary blob. Once the peer set elects a
leader, the peer set can accept new log entries. When clients ask the set to
append a new log entry, the leader writes the entry to durable storage and tries
to replicate the data to a quorum of followers. Once the log entry is
**committed**, the leader **applies** the log entry to a deterministic finite
state machine to maintain the cluster state.
<Note title="Raft in Vault">
Vault uses [BoltDB](https://github.com/etcd-io/bbolt) or WAL Raft as the
deterministic finite state machine and blocks writes until they are both
committed **and** applied.
</Note>
### Compacting logs
To avoid unbounded growth in the replicated logs, Raft saves the current state
to snapshots then compacts the associated logs. Because the finite-state machine
is deterministic, restoring a snapshot of the DFSM always results in the same
state as replaying the sequence of logs associated with the snapshot. Taking
snapshots lets Raft capture the DFSM state at any point in time and then remove
the logs used to reach that state, thereby compacting the log data.
<Note title="Raft in Vault">
Vault compacts logs automatically to prevent unbounded disk usage while also
minimizing the time spent replaying logs. Using BoltDB as the DFSM also keeps
the Vault snapshots lightweight: because the Vault data is already persisted to
disk in BoltDB, the snapshot process only needs to truncate the Raft logs.
</Note>
### Quorum
Raft consensus is fault-tolerant when a peer set has quorum. However, when a
quorum of nodes is **not** available, the peer set cannot process log entries,
elect leaders, or manage peer membership.
For example, suppose there are only 2 peers: A and B. To have quorum, both nodes
must participate, so the quorum size is 2. As a result, both nodes must agree
before they can commit a log entry. If one of the nodes fails, the remaining
node cannot reach quorum; the peer set can no longer add or remove nodes or
commit additional log entries. When the peer set can no longer take action, it
becomes **unavailable**. Once a peer set becomes unavailable, it can only be
recovered manually by removing the failing node and restarting the remaining
node in bootstrap mode so it self-elects as leader.
## Raft leadership in Vault
When a single Vault server (node)
[initializes](/vault/docs/commands/operator/init/#operator-init), it establishes
a cluster (peer set) of size 1 and elects itself as leader. Once the
cluster has a leader, additional servers can join the cluster using an
encrypted challenge/answer workflow. For the join process to work, all nodes
in a single Raft cluster must share the same seal configuration. If the cluster
is configured to use auto-unseal, the join process automatically decrypts the
challenge and responds with the answer using the configured seal. For other seal
options, like a Shamir seal, nodes must have access to the unseal keys before
joining so they can decrypt the challenge and respond with the decrypted answer.
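For example, once a leader exists, each additional server can join the cluster
with `vault operator raft join`; the address below is a placeholder for your
active node:

```shell-session
$ vault operator raft join https://active-node.example.com:8200
```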
In a [high availability](/vault/docs/internals/high-availability#design-overview)
configuration, the active Vault node is the leader node and all standby nodes
are followers.
## Leadership elections
Nodes become the Raft leader through Raft leadership elections.
All nodes in a Raft cluster start as **followers**. Followers monitor leader
health through a **leader heartbeat**. If a follower does not receive a heartbeat
within the configured **heartbeat timeout**, the node becomes a **candidate**.
Candidates watch for election notices from other nodes in the cluster. If the
**election timeout** period expires, the candidate starts an election for
leader. If the candidate gets responses from a quorum of other nodes in the
cluster, the candidate becomes the new leader node.
Raft leaders may step down voluntarily if the node cannot connect to a quorum
of nodes within the **leader lease timeout** period.
The relevant timeout periods (heartbeat timeout, election timeout, leader lease
timeout) scale according to the [`performance_multiplier`](/vault/docs/configuration/storage/raft#performance-multiplier) setting in your Vault configuration. By default,
the `performance_multiplier` is 5, which translates to the following timeout
values:
Timeout | Default duration
-------------------- | ----------------
Heartbeat timeout | 5 seconds
Election timeout | 5 seconds
Leader lease timeout | 2.5 seconds
We recommend using the default multiplier unless one of the following is true:
- Platform telemetry strongly indicates the default behavior is insufficient.
- The reliability of your platform or network requires different behavior.
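As a configuration sketch, the multiplier lives in the `raft` storage stanza of
the server configuration; the path and node ID below are placeholder values:

```hcl
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "node-1"

  # Scales the heartbeat, election, and leader lease timeouts.
  # The default value of 5 suits most deployments.
  performance_multiplier = 5
}
```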
## BoltDB Raft logs
BoltDB is a single file database, which means BoltDB cannot shrink the file on
disk to recover space when you delete data. Instead, BoltDB notes the places
where the deleted data was stored on a "freelist". On subsequent writes, BoltDB
consults the freelist to reuse old pages before allocating new space to persist
the data.
<Warning title="BoltDB requires careful tuning">
1. On Vault clusters with high churn, the BoltDB freelist can become quite large
and the database file can become highly fragmented. Large freelists and
fragmented database files can slow BoltDB transactions and directly impact the
performance of your Vault cluster.
1. On busy Vault clusters, where new followers struggle to sync Raft snapshots
before receiving subsequent snapshots from the leader, the BoltDB file is
susceptible to sudden bursts of writes. Not only can new followers fail to
join the quorum, but Vault installations that do not provision for spiky file
growth (or that over-allocate and waste disk space) will likely see poor
performance.
</Warning>
## Write-ahead Raft logs
@include 'alerts/experimental.mdx'
By default, Vault uses the `raft-boltdb` library for BoltDB to store Raft logs,
but you can also configure Vault to use the
[`raft-wal`](https://github.com/hashicorp/raft-wal) library for write-ahead Raft
logs.
Library | Filename(s) | Storage directory
------------- |------------------------------------------------------------| ----------------
`raft-boltdb` | `raft.db` | `raft`
`raft-wal` | `wal-meta.db`, `XXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXX.wal` | `raft/wal`
The `raft-wal` library is designed specifically for storing Raft logs. Rather
than using a freelist like `raft-boltdb`, `raft-wal` maintains a directory of
files as its data store and compacts data over time to free up space when a
given file is no longer needed.
Storing data as files in a directory also means that the `raft-wal` library can
easily increase or decrease the number of logs retained by leaders before
truncating and compacting without risking poor performance from spiky writes.
## Quorum management in Vault
### With autopilot
With the [autopilot](/vault/docs/concepts/integrated-storage/autopilot) feature,
Vault uses a configurable set of parameters to confirm a node is healthy before
considering it an eligible voter in the quorum list.
Autopilot is enabled by default and includes stabilization logic for nodes
joining the cluster:
- A node joins the cluster as a non-voter.
- The joined node syncs with the current Raft index.
- Once the configured stability threshold is met, the node becomes a full voting
member of the cluster.
<Warning title="Verify your stability threshold is appropriate">
Setting the stability threshold too low can lead to cluster instability because
nodes may begin voting before they are fully in sync with the Raft index.
</Warning>
Autopilot also includes a dead server cleanup feature. When you enable dead
server cleanup with the
[Autopilot API](/vault/api-docs/system/storage/raftautopilot), Vault
automatically removes unhealthy nodes from the Raft cluster without manual
operator intervention.
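As an illustration, you can inspect and tune these parameters with the
autopilot CLI; the threshold values below are examples, not recommendations:

```shell-session
$ # Review the current autopilot settings
$ vault operator raft autopilot get-config

$ # Enable dead server cleanup with a minimum quorum guardrail
$ vault operator raft autopilot set-config \
    -cleanup-dead-servers=true \
    -min-quorum=3 \
    -dead-server-last-contact-threshold=1m \
    -server-stabilization-time=30s
```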
### Without autopilot
Without autopilot, when a node joins a Raft cluster, the node tries to catch up
with the peer set just by replicating data received from the leader. While the
node is in the initial synchronization state, it cannot vote, but **is** counted for
the purposes of quorum. If multiple nodes join the cluster simultaneously (or
within a small enough window) the cluster may exceed the expected failure
tolerance, quorum may be lost, and the cluster can fail.
For example, consider a 3-node cluster with a large amount of data and a failure
tolerance of 1. If 3 nodes join the cluster at the same time, the cluster size
becomes 6 with an expected failure tolerance of 2. But 3 of the nodes are still
synchronizing and cannot vote, which means the cluster loses quorum.
If you are not using autopilot, we strongly recommend that you ensure all new
nodes have Raft indexes that are in sync (or very close to in sync) with the
leader before adding additional nodes. You can check the status of current Raft
indexes with the `vault status` CLI command.
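For example, `vault status` reports the Raft indexes you can compare between
the leader and a joining node (output abridged; exact fields vary by version
and storage configuration):

```shell-session
$ vault status
...
Raft Committed Index    214309
Raft Applied Index      214309
```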
## Quorum size and failure tolerance
The table below compares quorum size and failure tolerance for various
cluster sizes.
Servers | Quorum size | Failure tolerance
:-----: | :---------: | :---------------:
1 | 1 | 0
2 | 2 | 0
3 | 2 | 1
4 | 3 | 1
**5** | **3** | **2**
6 | 4 | 2
7 | 4 | 3
<Highlight title="Best practice">
For best performance, we recommend at least 5 servers for a standard
production deployment to maintain a minimum failure tolerance of 2. We also
recommend maintaining a cluster with an odd number of nodes to avoid voting
stalemates.
We **strongly discourage** single server deployments for production use due to
the high risk of data loss during failure scenarios.
</Highlight>
To maintain failure tolerance during maintenance and other changes, we recommend
sequentially scaling and reverting your cluster, 2 nodes at a time.
For example, if you start with a 5-node cluster:
1. Scale the cluster to 7 nodes.
1. Confirm the new nodes are joined and in sync with the rest of the peer set.
1. Stop or destroy 2 of the older nodes.
1. Repeat this process 2 more times to cycle out the rest of the pre-existing nodes.
You should always maintain quorum to limit the impact on failure tolerance when
changing or scaling your Vault instance.
### Redundancy Zones
If you are using autopilot with [redundancy zones](/vault/docs/enterprise/redundancy-zones),
the total number of servers will differ from the table above and depends on
how many redundancy zones and servers per redundancy zone you choose.
@include 'autopilot/redundancy-zones.mdx'
<Highlight title="Best practice">
If you choose to use redundancy zones, we **strongly recommend** using at least 3
zones to ensure failure tolerance.
</Highlight>
Redundancy zones | Servers per zone | Quorum size | Failure tolerance | Optimistic failure tolerance
:--------------: | :--------------: | :---------: | :---------------: | :--------------------------:
2 | 2 | 2 | 0 | 2
3 | 2 | 2 | 1 | 3
3 | 3 | 2 | 1 | 5
5 | 2 | 3 | 2 | 6
[consensus protocol]: https://en.wikipedia.org/wiki/Consensus_(computer_science)
[consistency]: https://en.wikipedia.org/wiki/CAP_theorem
["Raft: In search of an Understandable Consensus Algorithm"]: https://raft.github.io/raft.pdf
[paxos]: https://en.wikipedia.org/wiki/Paxos_%28computer_science%29
[The Secret Lives of Data]: http://thesecretlivesofdata.com/raft
[DFSM]: https://en.wikipedia.org/wiki/Finite-state_machine
Note Compacting logs To avoid unbounded growth in the replicated logs Raft saves the current state to snapshots then compacts the associated logs Because the finite state machine is deterministic restoring a snapshot of the DFSM always results in the same state as replaying the sequence of logs associated with the snapshot Taking snapshots lets Raft capture the DFSM state at any point in time and then remove the logs used to reach that state thereby compacting the log data Note title Raft in Vault Vault compacts logs automatically to prevent unbounded disk usage while also minimizing the time spent replaying logs Using BoltDB as the DFSM also keeps the Vault snapshots lightweight because the Vault data is already persisted to disk in BoltDB the snapshot process just needs to truncate the Raft logs Note Quorum Raft consensus is fault tolerant when a peer set has quorum However when a quorum of nodes is not available the peer set cannot process log entries elect leaders or manage peer membership For example suppose there are only 2 peers A and B To have quorum both nodes must participate so the quorum size is 2 As a result both nodes must agree before they can commit a log entry If one of the nodes fails the remaining node cannot reach quorum the peer set can no longer add or remove nodes or commit additional log entries When the peer set can no longer take action it becomes unavailable Once a peer set becomes unavailable it can only be recovered manually by removing the failing node and restarting the remaining node in bootstrap mode so it self elects as leader Raft leadership in Vault When a single Vault server node initializes vault docs commands operator init operator init it establishes a cluster peer set of size 1 and self elects itself as leader Once the cluster has a leader additional servers can join the cluster using an encrypted challenge answer workflow For the join process to work all nodes in a single Raft cluster must share the same seal configuration If the cluster is configured to use auto unseal the join process automatically decrypts the challenge and responds with the answer using the configured seal For other seal options like a Shamir seal nodes must have access to the unseal keys before joining so they can decrypt the challenge and respond with the decrypted answer In a high availability vault docs internals high availability design overview configuration the active Vault node is the leader node and all standby nodes are followers Leadership elections Nodes become the Raft leader through Raft leadership elections All nodes in a Raft cluster start as followers Followers monitor leader health through a leader heartbeat If a follower does not receive a heartbeat within the configured heartbeat timeout the node becomes a candidate Candidates watch for election notices from other nodes in the cluster If the election timeout period expires the candidate starts an election for leader If the candidate gets responses from a quorum of other nodes in the cluster the candidate becomes the new leader node Raft leaders may step down voluntarily if the node cannot connect to a quorum of nodes with the leader lease timeout period The relevant timeout periods heartbeat timeout election timeout leader lease timeout scale according to the performance multiplier vault docs configuration storage raft performance multiplier setting in your Vault configuration By default the performance multiplier is 5 which translates to the following timeout values Timeout Default duration Heartbeat 
timeout 5 seconds Election timeout 5 seconds Leader lease timeout 2 5 seconds We recommend using the default multiplier unless one of the following is true Platform telemetry strongly indicates the default behavior is insufficient The reliability of your platform or network requires different behavior BoltDB Raft logs BoltDB is a single file database which means BoltDB cannot shrink the file on disk to recover space when you delete data Instead BoltDB notes the places where the deleted data was stored on a freelist On subsequent writes BoltDB consults the freelist to reuse old pages before allocating new space to persist the data Warning title BoltDB requires careful tuning 1 On Vault clusters with high churn the BoltDB freelist can become quite large and the database file can become highly fragmented Large freelists and fragmented database files can slow BoltDB transaction and directly impact the performance of your Vault cluster 1 On busy Vault clusters where new followers struggle to sync Raft snapshots before receiving subsequent snapshots from the leader the BoltDB file is susceptible to sudden bursts of writes Not only will new followers potentially fail to join quorum Vault installations that do not provide for spiky file growth or over allocate and waste disk space will likely see poor performance Warning Write ahead Raft logs include alerts experimental mdx By default Vault uses the raft boltdb library for BoltDB to store Raft logs but you can also configure Vault to use the raft wal https github com hashicorp raft wal library for write ahead Raft logs Library Filename s Storage directory raft boltdb raft db raft raft wal wal meta db XXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXX wal raft wal The raft wal library is designed specifically for storing Raft logs Rather than using a freelist like raft boltdb raft wal maintains a directory of files as its data store and compacts data over time to free up space when a given file is no longer needed Storing data as files in a directory also means that the raft wal library can easily increase or decrease the number of logs retained by leaders before truncating and compacting without risking poor performance from spiky writes Quorum management in Vault With autopilot With the autopilot vault docs concepts integrated storage autopilot feature Vault uses a configurable set of parameters to confirm a node is healthy before considering it an eligible voter in the quorum list Autopilot is enabled by default and includes stabilization logic for nodes joining the cluster A node joins the cluster as a non voter The joined node syncs with the current Raft index Once the configured stability threshold is met the node becomes a full voting member of the cluster Warning title Verify your stability threshold is appropriate Setting the stability threshold too low can lead to cluster instability because nodes may begin voting before they are fully in sync with the Raft index Warning Autopilot also includes a dead server cleanup feature When you enable dead server cleanup with the Autopilot API vault api docs system storage raftautopilot Vault automatically removes unhealthy nodes from the Raft cluster without manual operator intervention Without autopilot Without autopilot when a node joins a Raft cluster the node tries to catch up with the peer set just by replicating data received from the leader While the node is in the initial synchronization state it cannot vote but is counted for the purposes of quorum If multiple nodes join the cluster simultaneously or 
### Without autopilot

Without autopilot, when a node joins a Raft cluster, the node tries to catch up
with the peer set just by replicating data received from the leader. While the
node is in the initial synchronization state, it cannot vote, but it is counted
for the purposes of quorum. If multiple nodes join the cluster simultaneously,
or within a small enough window, the cluster may exceed the expected failure
tolerance, quorum may be lost, and the cluster can fail.

For example, consider a 3-node cluster with a large amount of data and a
failure tolerance of 1. If 3 nodes join the cluster at the same time, the
cluster size becomes 6 with an expected failure tolerance of 2. But 3 of the
nodes are still synchronizing and cannot vote, which means the cluster loses
quorum.

If you are not using autopilot, we strongly recommend that you ensure all new
nodes have Raft indexes that are in sync, or very close to in sync, with the
leader before adding additional nodes. You can check the status of current Raft
indexes with the `vault status` CLI command.

## Quorum size and failure tolerance

The table below compares quorum size and failure tolerance for various cluster
sizes:

| Servers | Quorum size | Failure tolerance |
| :-----: | :---------: | :---------------: |
|    1    |      1      |         0         |
|    2    |      2      |         0         |
|    3    |      2      |         1         |
|    4    |      3      |         1         |
|    5    |      3      |         2         |
|    6    |      4      |         2         |
|    7    |      4      |         3         |

<Highlight title="Best practice">

For best performance, we recommend at least 5 servers for a standard production
deployment to maintain a minimum failure tolerance of 2. We also recommend
maintaining a cluster with an odd number of nodes to avoid voting stalemates.
We strongly discourage single-server deployments for production use due to the
high risk of data loss during failure scenarios.

</Highlight>

To maintain failure tolerance during maintenance and other changes, we
recommend sequentially scaling and reverting your cluster, 2 nodes at a time.
For example, if you start with a 5-node cluster:

1. Scale the cluster to 7 nodes.
1. Confirm the new nodes are joined and in sync with the rest of the peer set.
1. Stop or destroy 2 of the older nodes.
1. Repeat this process 2 more times to cycle out the rest of the pre-existing
   nodes.

You should always maintain quorum to limit the impact on failure tolerance when
changing or scaling your Vault instance.

### Redundancy zones

If you are using autopilot with
[redundancy zones](/vault/docs/enterprise/redundancy-zones), the total number
of servers will be different from the above, and is dependent on how many
redundancy zones and servers per redundancy zone you choose.

@include 'autopilot/redundancy-zones.mdx'

<Highlight title="Best practice">

If you choose to use redundancy zones, we strongly recommend using at least 3
zones to ensure failure tolerance.

</Highlight>

| Redundancy zones | Servers per zone | Quorum size | Failure tolerance | Optimistic failure tolerance |
| :--------------: | :--------------: | :---------: | :---------------: | :--------------------------: |
|        2         |        2         |      2      |         0         |              2               |
|        3         |        2         |      2      |         1         |              3               |
|        3         |        3         |      2      |         1         |              5               |
|        5         |        2         |      3      |         2         |              6               |

[consensus protocol]: https://en.wikipedia.org/wiki/Consensus_(computer_science)
[consistency]: https://en.wikipedia.org/wiki/CAP_theorem
[raft]: https://raft.github.io/raft.pdf "In Search of an Understandable Consensus Algorithm"
[paxos]: https://en.wikipedia.org/wiki/Paxos_%28computer_science%29
[the secret lives of data]: http://thesecretlivesofdata.com/raft
[fsm]: https://en.wikipedia.org/wiki/Finite-state_machine
---
layout: docs
page_title: Security Model
description: Learn about the security model of Vault.
---
# Security model
Due to the nature of Vault and the confidentiality of the data it manages,
the Vault security model is critical. The overall goal of Vault's security
model is to provide [confidentiality, integrity, availability, accountability,
authentication](https://en.wikipedia.org/wiki/Information_security).
This means that data at rest and in transit must be secure from eavesdropping
or tampering. Clients must be appropriately authenticated and authorized
to access data or modify policies. All interactions must be auditable and traced
uniquely back to the origin entity, and the system must be robust against intentional
attempts to bypass any of its access controls.
## Threat model
The following are the various parts of the Vault threat model:
- Eavesdropping on any Vault communication. Client communication with Vault
should be secure from eavesdropping as well as communication from Vault to
its storage backend or between Vault cluster nodes.
- Tampering with data at rest or in transit. Any tampering should be detectable
and cause Vault to abort processing of the transaction.
- Access to data or controls without authentication or authorization. All requests
must be validated against the applicable security policies.
- Access to data or controls without accountability. If audit logging
is enabled, requests and responses must be logged before the client receives
any secret material.
- Confidentiality of stored secrets. Any data that leaves Vault to rest in the
storage backend must be safe from eavesdropping. In practice, this means all
data at rest must be encrypted.
- Availability of secret material in the face of failure. Vault supports
running in a highly available configuration to avoid loss of availability.
The following are not considered part of the Vault threat model:
- Protecting against arbitrary control of the storage backend. An attacker
that can perform arbitrary operations against the storage backend can
undermine security in any number of ways that are difficult or impossible to protect
against. As an example, an attacker could delete or corrupt all the contents
of the storage backend causing total data loss for Vault. The ability to control
reads would allow an attacker to snapshot in a well-known state and rollback state
changes if that would be beneficial to them.
- Protecting against the leakage of the existence of secret material. An attacker
that can read from the storage backend may observe that secret material exists
and is stored, even if it is kept confidential.
- Protecting against memory analysis of a running Vault. If an attacker is able
to inspect the memory state of a running Vault instance, then the confidentiality
of data may be compromised.
- Protecting against flaws in external systems or services used by Vault.
Some authentication methods or secrets engines delegate sensitive operations to
systems external to Vault. If an attacker can compromise credentials or otherwise
exploit a vulnerability in these external systems, then the confidentiality or
integrity of data may be compromised.
- Protecting against malicious plugins or code execution on the underlying host.
If an attacker can gain code execution or write privileges to the underlying host,
then the confidentiality or the integrity of data may be compromised.
- Protecting against flaws in clients or systems that access Vault. If an attacker
can compromise a Vault client (e.g., system, browser) and obtain this client’s Vault
credentials, they can access Vault with the level of privilege associated with this
client.
- Protecting against Vault administrators supplying vulnerable or malicious configuration
data. Any data provided as configuration values to Vault's administrative endpoints
(e.g. [secret engines](/vault/docs/secrets) configurations), or Vault's
configuration files should be validated. If an attacker can write to Vault's
configuration, then the confidentiality or integrity of data can be compromised.
## External threat overview
The Vault architecture comprises three distinct systems:
- Client: Speaks to Vault over an API.
- Server: Provides an API and serves requests.
- Storage backend: Utilized by the server to read and write data.
There is no mutual trust between the Vault client and server. Clients use
[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) to verify the
identity of the server and to establish a secure communication channel. Servers
require that a client provides a client token for every request which is used
to identify the client. A client that does not provide their token is only
permitted to make login requests.
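For illustration (the server address, auth mount, and username below are
hypothetical), a client first authenticates to obtain a token and then presents
that token on each request:

```shell-session
# Log in against a userpass auth method to obtain a client token:
$ vault login -method=userpass username=alice

# Present the token explicitly when calling the HTTP API directly:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    https://vault.example.com:8200/v1/secret/data/my-app
```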
All server-to-server traffic between Vault instances within a cluster (i.e.,
high availability, enterprise replication or integrated storage) uses
mutually-authenticated TLS to ensure the confidentiality and integrity of data
in transit. Nodes are authenticated prior to joining the cluster by an
[unseal challenge](/vault/docs/concepts/integrated-storage#vault-networking-recap) or
a [one-time-use activation token](/vault/docs/enterprise/replication#security-model).
The storage backends used by Vault are also untrusted by design. Vault uses a
security barrier for all requests made to the backend. The security barrier
automatically encrypts all data leaving Vault using a 256-bit [Advanced
Encryption Standard
(AES)](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) cipher in
the [Galois Counter Mode
(GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) with 96-bit nonces.
The nonce is randomly generated for every encrypted object. When data is read
from the security barrier, the GCM authentication tag is verified during the
decryption process to detect any tampering.
Depending on the backend used, Vault may communicate with the backend over TLS
to provide an added layer of security. In some cases, such as a file backend,
this is not applicable. Because storage backends are untrusted, an eavesdropper
would only gain access to encrypted data even if communication with the backend
was intercepted.
## Internal threat overview
Within the Vault system, a critical security concern is an attacker attempting
to gain access to secret material they are not authorized to access. This is an
internal threat if the attacker is already permitted some level of access to
Vault and is able to authenticate.
When a client first authenticates with Vault, an auth method is used to verify
the identity of the client and to return a list of associated ACL policies.
This association is configured by operators of Vault ahead of time. For
example, GitHub users in the "engineering" team may be mapped to the
"engineering" and "ops" Vault policies. Vault then generates a client token
which is a randomly generated, serialized value and maps it to the policy list.
This client token is then returned to the client.
On each request, a client provides this token. Vault then uses it to check that
the token is valid and has not been revoked or expired, and generates an ACL
based on the associated policies. Vault uses a strict default deny
enforcement strategy. This means unless an associated policy allows for a given action,
it will be denied. Each policy specifies a level of access granted to a path in
Vault. When the policies are merged (if multiple policies are associated with a
client), the highest access level permitted is used. For example, if the
"engineering" policy permits read/update access to the "eng/" path, and the
"ops" policy permits read access to the "ops/" path, then the user gets the
union of those. Policy is matched using the most specific defined policy, which
may be an exact match or the longest-prefix match glob pattern. See
[Policy Syntax](/vault/docs/concepts/policies#policy-syntax) for more details.
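A minimal sketch of the merge behavior described above, using hypothetical
policy names and paths:

```shell-session
# A hypothetical "engineering" policy granting read/update under eng/:
$ vault policy write engineering - <<EOF
path "eng/*" {
  capabilities = ["read", "update"]
}
EOF

# Inspect the effective capabilities a token holds on a specific path:
$ vault token capabilities $CLIENT_TOKEN eng/ci
read, update
```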
Certain operations are only permitted by "root" users, which is a distinguished
policy built into Vault. This is similar to the concept of a root user on a
Unix system or an administrator on Windows. In cases where clients are provided
with root tokens or associated with the root policy, Vault supports the
notion of "sudo" privilege. As part of a policy, users may be granted "sudo"
privileges to certain paths, so that they can still perform security sensitive
operations without being granted global root access to Vault.
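For example (the policy name is hypothetical), a policy can grant "sudo" on a
specific security-sensitive path instead of handing out a root token:

```shell-session
$ vault policy write audit-admin - <<EOF
# Reading sys/audit requires sudo-level access:
path "sys/audit" {
  capabilities = ["read", "sudo"]
}
EOF
```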
Lastly, Vault supports using a [Two-person
rule](https://en.wikipedia.org/wiki/Two-person_rule) for unsealing using [Shamir's
Secret Sharing
technique](https://en.wikipedia.org/wiki/Shamir's_Secret_Sharing). When Vault
is started, it starts in a _sealed_ state. This means that the encryption key
needed to read and write from the storage backend is not yet known. The process
of unsealing requires providing the root key so that the encryption key can
be retrieved. The risk of distributing the root key is that a single
malicious attacker with access to it can decrypt the entire Vault. Instead,
Shamir's technique allows us to split the root key into multiple shares or
parts. The number of shares and the threshold needed is configurable, but by
default Vault generates 5 shares, any 3 of which must be provided to
reconstruct the root key.
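The share count and threshold are set at initialization time; the command below
sketches the defaults explicitly:

```shell-session
$ vault operator init -key-shares=5 -key-threshold=3
```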
By using a secret sharing technique, we avoid the need to place absolute trust
in the holder of the root key, and avoid storing the root key at all. The
root key is only retrievable by reconstructing the shares. The shares are not
useful for making any requests to Vault, and can only be used for unsealing.
Once unsealed, the standard ACL mechanisms are used for all requests.
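Unsealing then consumes shares one at a time until the threshold is reached:

```shell-session
# Repeat until 3 of the 5 shares have been entered; each invocation
# prompts for a single unseal key share:
$ vault operator unseal
```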
To make an analogy, a bank puts security deposit boxes inside of a vault. Each
security deposit box has a key, while the vault door has both a combination and
a key. The vault is encased in steel and concrete so that the door is the only
practical entrance. The analogy to Vault is that the cryptosystem is the
steel and concrete protecting the data. While you could tunnel through the
concrete or brute force the encryption keys, it would be prohibitively time
consuming. Opening the bank vault requires two-factors: the key and the
combination. Similarly, Vault requires multiple shares be provided to
reconstruct the root key. Once unsealed, each security deposit box still
requires that the owner provide a key, and similarly the Vault ACL system protects
all the secrets stored.
---
layout: docs
page_title: Replication
description: Learn about the details of multi-datacenter replication within Vault.
---
# Replication (Vault enterprise)
Vault Enterprise 0.7 adds support for multi-datacenter replication. Before
using this feature, it is useful to understand the intended use cases, design
goals, and high level architecture.
Replication is based on a primary/secondary (1:N) model with asynchronous
replication, focusing on high availability for global deployments. The
trade-offs made in the design and implementation of replication reflect these
high level goals.
## Use cases
Vault replication is based on a number of common use cases:
- **Multi-Datacenter Deployments**: A common challenge is providing Vault to
applications across many datacenters in a highly-available manner. Running a
single Vault cluster imposes high latency of access for remote clients,
availability loss or outages during connectivity failures, and limits
scalability.
- **Backup Sites**: Implementing a robust business continuity plan around the
loss of a primary datacenter requires the ability to quickly and easily fail
to a hot backup site.
- **Scaling Throughput**: Applications that use Vault for
Encryption-as-a-Service or cryptographic offload may generate a very high
volume of requests for Vault. Replicating keys between multiple clusters
allows load to be distributed across additional servers to scale request
throughput.
## Design goals
Based on the use cases for Vault Replication, we had a number of design goals
for the implementation:
- **Availability**: Global deployments of Vault require high levels of
availability, and can tolerate reduced consistency. During full connectivity,
replication is nearly real-time between the primary and secondary clusters.
Degraded connectivity between a primary and secondary does not impact the
primary's ability to service requests, and the secondary will continue to
service reads on last-known data.
- **Conflict Free**: Certain replication techniques allow for potential write
conflicts to take place. In particular, any active/active configuration where
writes are allowed to multiple sites requires a conflict resolution strategy.
Strategies range from techniques that allow for data loss, like last-write-wins,
to techniques that require manual operator resolution, like allowing multiple
values per key. We avoid the possibility of conflicts to ensure there is no
data loss or manual intervention required.
- **Transparent to Clients**: Vault replication should be transparent to
clients of Vault, so that existing thin clients work unmodified. The Vault
servers handle the logic of request forwarding to the primary when necessary,
and multi-hop routing is performed internally to ensure requests are
processed.
- **Simple to Operate**: Operating a replicated cluster should be simple to
avoid administrative overhead and potentially introducing security gaps.
Setup of replication is very simple, and secondaries can handle being
arbitrarily behind the primary, avoiding the need for operator intervention
to copy data or snapshot the primary.
## Architecture
The architecture of Vault replication is based on the design goals, focusing on
the intended use cases. When replication is enabled, a cluster is set as either
a _primary_ or _secondary_. The primary cluster is authoritative, and is the
only cluster allowed to perform actions that write to the underlying data
storage, such as modifying policies or secrets. Secondary clusters can service
all other operations, such as reading secrets or sending data through
`transit`, and forward any writes to the primary cluster. Disallowing multiple
primaries ensures the cluster is conflict free and has an authoritative state.
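As a sketch of this topology (Vault Enterprise; the secondary ID below is
hypothetical), replication is enabled on the primary, which then issues an
activation token for each secondary:

```shell-session
# On the primary cluster:
$ vault write -f sys/replication/performance/primary/enable
$ vault write sys/replication/performance/primary/secondary-token id=secondary-east

# On the secondary cluster, using the wrapped activation token from above:
$ vault write sys/replication/performance/secondary/enable token=<activation_token>
```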
The primary cluster uses log shipping to replicate changes to all of the
secondaries. This ensures writes are visible globally in near real-time when
there is full network connectivity. If a secondary is down or unable to
communicate with the primary, writes are not blocked on the primary and reads
are still serviced on the secondary. This ensures the availability of Vault.
When the secondary is initialized or recovers from degraded connectivity it
will automatically reconcile with the primary.
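You can observe this replication state, including how far a secondary lags
behind the primary's WAL stream, from either cluster:

```shell-session
$ vault read -format=json sys/replication/status
```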
Lastly, clients can speak to any Vault server without a thick client. If a
client is communicating with a standby instance, the request is automatically
forwarded to an active instance. Secondary clusters will service reads locally
and forward any write requests to the primary cluster. The primary cluster is
able to service all request types.
An important optimization Vault makes is to avoid replication of tokens or
leases between clusters. Policies and secrets are the minority of data managed
by Vault and tend to be relatively stable. Tokens and leases are much more
dynamic, as they are created and expire rapidly. Keeping tokens and leases
locally reduces the amount of data that needs to be replicated, and distributes
the work of TTL management between the clusters. The caveat is that clients
will need to re-authenticate if they switch the Vault cluster they are
communicating with.
## Implementation details
It is important to understand the high-level architecture of replication to
ensure the trade-offs are appropriate for your use case. The implementation
details may be useful for those who are curious or want to understand more
about the performance characteristics or failure scenarios.
Using replication requires a storage backend that supports transactional
updates, such as Consul. This allows multiple key/value updates to be
performed atomically. Replication uses this to maintain a
[Write-Ahead-Log][wal] (WAL) of all updates, so that the key update happens
atomically with the WAL entry creation. The WALs are then used to perform log
shipping between the Vault clusters. When a secondary is closely synchronized
with a primary, Vault directly streams new WALs to be applied, providing near
real-time replication. A bounded set of WALs are maintained for the
secondaries, and older WALs are garbage collected automatically.
When a secondary is initialized or is too far behind the primary there may not
be enough WALs to synchronize. To handle this scenario, Vault maintains a
[merkle index][merkle] of the encrypted keys. Any time a key is updated or
deleted, the merkle index is updated to reflect the change. When a secondary
needs to reconcile with a primary, they compare their merkle indexes to
determine which keys are out of sync. The structure of the index allows this to
be done very efficiently, usually requiring only two round trips and a small
amount of data. The secondary uses this information to reconcile and then
switches back into WAL streaming mode.
Performance is an important concern for Vault, so WAL entries are batched and
the merkle index is not flushed to disk with every operation. Instead, the
index is updated in memory for every operation and asynchronously flushed to
disk. As a result, a crash or power loss may cause the merkle index to become
out of sync with the underlying keys. Vault uses the [ARIES][aries] recovery
algorithm to ensure the consistency of the index under those failure
conditions.
Log shipping traditionally requires the WAL stream to be synchronized, which
can introduce additional complexity when a new primary cluster is promoted.
Vault uses the merkle index as the source of truth, allowing the WAL streams to
be completely distinct and unsynchronized. This simplifies administration of
Vault Replication for operators.
## Addressing
### Cluster addresses on the primary
When a cluster is enabled as replication primary, it persists a cluster
definition to storage, under `core/cluster/replicated/info` or
`core/cluster/replicated-dr/info`. An optional field of the cluster definition
is `primary_cluster_addr`, which may be provided in the enable request.
Performance standbys regularly issue heartbeat RPC requests to the active node, and
one of the arguments to the RPC is the local node's `cluster_addr`. The
primary active node retains these cluster addresses received from its peers in
an in-memory cache named `clusterPeerClusterAddrsCache` with a 15s expiry time.
### Cluster addresses on the secondary
When a secondary is enabled, its replication activation token (obtained from
the primary) includes a `primary_cluster_addr` field. This is taken from the
persisted cluster definition created when the primary was enabled, or if no
`primary_cluster_addr` was provided at that time, the token contains the
`cluster_addr` of the current active node in the primary at the time the
activation token is created.
The secondary persists its own version of the cluster definition to storage,
again under `core/cluster/replicated/info` or
`core/cluster/replicated-dr/info`. Here the `primary_cluster_addr` field is
the one obtained from the activation token.
The secondary active node regularly issues heartbeat RPC requests to the
primary active node. In response to these, the primary returns a response
which includes a `ClusterAddrs` field, comprising the contents of its
`clusterPeerClusterAddrsCache` plus the current active node's `cluster_addr`.
The secondary uses the response both to update its in-memory record of
`known_primary_cluster_addrs` and to persist the addresses to storage
under `core/primary-addrs/dr` or `core/primary-addrs/perf`. When this happens
it logs the line `"replication: successful heartbeat"`, which includes the
`ClusterAddrs` value obtained in the response.
### Secondary RPC address resolution
gRPC is given a list of target addresses for it to use in performing RPC
requests. gRPC will discover that performance standbys can't service most RPCs, and
will quickly weed out all but the active node cluster address. If the primary
active node changes, gRPC will learn that its address is no longer viable and
will automatically fail over to the new active node, assuming it's one of the
known target addresses.
The secondary runs a background resolver goroutine that, every few seconds,
builds the gRPC target list of addresses for the primary. Its output is logged
at Trace level as `"loaded addresses"`.
To build the primary cluster address list, the resolver goroutine normally
simply concatenates `known_primary_cluster_addrs` with the
`primary_cluster_addr` in the cluster definition in storage.
There are two exceptions to that normal behavior: the first time the goroutine
goes through its loop, and when gRPC asks for a forced ResolveNow, which
happens when it's unable to perform RPCs on any of its target addresses. In
both these cases, the resolver goroutine issues a special RemoteResolve RPC to
the primary. This RPC is special because unlike all the other replication
RPCs, it can be serviced by performance standbys as well as the active node. In
either case the node will return the `primary_cluster_addr` stored in the
primary's cluster definition, if any, or failing that the current active node's
cluster address. The result of the RemoteResolve call gets included in the
list of target addresses the resolver gives to gRPC for regular RPCs to the
primary.
## Caveats
~> **Mismatched Cluster Versions**: It is not safe to replicate from a newer
version of Vault to an older version. When upgrading replicated clusters,
ensure that upstream clusters are always on an older version of Vault than
downstream clusters. See
[Upgrading Vault](/vault/docs/upgrading#replication-installations) for an example.
- **Read-After-Write Consistency**: All write requests are forwarded from
secondaries to the primary cluster in order to avoid potential conflicts.
While replication is near real-time, it is not instantaneous, meaning there
is a potential for a client to write to a secondary and a subsequent read to
return an old value. Secondaries attempt to mask this from an individual
client making subsequent requests by stalling write requests until the write
is replicated or a timeout is reached (2 seconds). If the timeout is reached,
the client will receive a warning. Clients can also take steps to protect
against this (see [Consistency](/vault/docs/enterprise/consistency#mitigations)
and the sketch after this list).
- **Stale Reads**: Secondary clusters service reads based on their
locally-replicated data. During normal operation updates from a primary are
received in near real-time by secondaries. However, during an outage or
network service disruption, replication may stall and secondaries may have
stale data. The cluster will automatically recover and reconcile any stale
data once the outage has recovered, but reads in the intervening period may
receive stale data.
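As a hedged illustration of one client-side mitigation (Vault Enterprise; the
hostname and secret path are hypothetical), a client can echo the
`X-Vault-Index` header returned by a write back on its next read so the serving
node can detect whether its local state is stale:

```shell-session
# Perform a write and capture the X-Vault-Index response header:
$ curl --silent --dump-header - --output /dev/null \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST --data '{"data": {"password": "example"}}' \
    https://vault.example.com:8200/v1/secret/data/my-app | grep -i x-vault-index

# Echo the header back on the subsequent read:
$ curl --silent \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Index: <index value from the write response>" \
    https://vault.example.com:8200/v1/secret/data/my-app
```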
[wal]: https://en.wikipedia.org/wiki/Write-ahead_logging
[merkle]: https://en.wikipedia.org/wiki/Merkle_tree
[aries]: https://en.wikipedia.org/wiki/Algorithms_for_Recovery_and_Isolation_Exploiting_Semantics
---
layout: docs
page_title: Limits and Maximums
description: Learn about the maximum number of objects within Vault.
---
# Vault limits and maximums
Vault imposes fixed upper limits on the size of certain fields and
objects, and configurable limits on others. Vault also has upper
bounds that are a consequence of its underlying storage. This page
attempts to collect these limits, to assist in planning Vault
deployments.
In some cases, the system will show performance problems in advance of
the absolute limits being reached.
## Storage-related limits
### Storage entry size
@include 'storage-entry-size.mdx'
Many of the other limits within Vault derive from the maximum size of
a storage entry, as described in the next sections. It is possible to
recover from an error where a storage entry has reached its maximum
size by reconfiguring Vault or Consul with a larger maximum storage
entry size.
### Mount point limits
All secret engine mount points, and all auth mount points, must each fit
within a single storage entry. Each JSON object describing a mount
takes about 500 bytes, but is stored in compressed form at a typical cost of
about 75 bytes. Each of (1) auth mounts, (2) secret engine mount points,
(3) local-only auth methods, and (4) local-only secret engine mounts are
stored separately, so the limit applies to each independently.
| | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| -------------------------------------------- | ------------------------ | ---------------------------------- |
| Maximum number of secret engine mount points | ~7000 | ~14000 |
| Maximum number of enabled auth methods | ~7000 | ~14000 |
| Maximum mount point length | no enforced limit | no enforced limit |
Specifying distinct per-mount options, or using long mount point paths, can
increase the space required per mount.
The number of mount points can be monitored by reading the
[`sys/auth`](/vault/api-docs/system/auth) and
[`sys/mounts`](/vault/api-docs/system/mounts) endpoints from the root namespace and
similar sub-paths for namespaces respectively, like: `namespace1/sys/auth`,
`namespace1/sys/mounts`, etc.
Alternatively, use the
[`vault.core.mount_table.num_entries`](/vault/docs/internals/telemetry/metrics/core-system#vault-core-mount_table-num_entries)
and
[`vault.core.mount_table.size`](/vault/docs/internals/telemetry/metrics/core-system#vault-core-mount_table-size)
telemetry metrics to monitor the number of mount points and size of each mount table.
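As a quick point-in-time check from the CLI (a sketch that assumes the `jq`
utility is installed), you can count mounts directly:

```shell-session
# Count secret engine mount points in the current namespace:
$ vault secrets list -format=json | jq 'length'

# Count enabled auth methods:
$ vault auth list -format=json | jq 'length'
```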
### Namespace limits
@include 'namespace-limits.mdx'
### Entity and group limits
The metadata that may be attached to an identity entity or an entity group
has the following constraints:
| | Limit |
| ------------------------------------- | --------- |
| Number of key-value pairs in metadata | 64 |
| Metadata key size | 128 bytes |
| Metadata value size | 512 bytes |
Vault shards the entities across 256 storage entries. This creates a
hard limit of 128MiB storage space used for entities on Consul, or
256MiB on integrated storage with its default settings. Entity aliases
are stored inline in the Entity objects and so consume the same pool
of storage. Entity definitions are compressed within each storage
entry, and the pre-compression size varies with the number of entity
aliases and the amount of metadata. Minimally-populated entities take
about 200 bytes after compression.
Group definitions are stored separately, in their own pool of 256
storage entries. The size of each group object depends on the number
of members and the amount of metadata. Group aliases and group
membership information are stored inline in each Group object. A group
with no metadata, holding 10 entities, will use about 500 bytes per
group. A group holding 100 entities would instead consume about 4,000
bytes.
The following table shows a best-case estimate and a more conservative
estimate for entities and groups. The number is slightly less than the
amount that fits in one shard, to reflect the fact that the first
shard to fill up will start inducing failures. This maximum will
decrease if each entity has a large amount of metadata, or if each
group has a large number of members.
| | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| ---------------------------------------------------------------------------------------- | ------------------------ | ---------------------------------- |
| Maximum number of identity entities (best case, 200 bytes per entity) | ~610,000 | ~1,250,000 |
| Maximum number of identity entities (conservative case, 500 bytes per entity) | ~250,000 | ~480,000 |
| Maximum number of identity entities (maximum permitted metadata, 41160 bytes per entity) | 670 | 2,400 |
| Maximum number of groups (10 entities per group) | ~250,000 | ~480,000 |
| Maximum number of groups (100 entities per group) | ~22,000 | ~50,000 |
| Maximum number of members in a group | ~11,500 | ~23,000 |
The number of entities can be monitored using Vault's [telemetry](/vault/docs/internals/telemetry#token-identity-and-lease-metrics); see `vault.identity.num_entities` (total) or `vault.identity.entities.count` (by namespace).
The cost of entity and group updates grows as the number of objects in
each shard increases. This cost can be monitored via the
`vault.identity.upsert_entity_txn` and
the `vault.identity.upsert_group_txn` metrics.
Very large internal groups (more than 1,000 members) should be avoided because
the membership list in a group must reside in a single storage entry. Instead,
consider using [external groups](/vault/docs/concepts/identity#external-vs-internal-groups)
or splitting the group into multiple sub-groups.
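To spot-check entity and group counts directly (a sketch assuming `jq` is
installed):

```shell-session
# Count identity entities by listing their IDs:
$ vault list -format=json identity/entity/id | jq 'length'

# Count identity groups the same way:
$ vault list -format=json identity/group/id | jq 'length'
```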
### Token limits
One storage entry is used per token; there is thus no
upper bound on the number of active tokens. There are no restrictions on
the token metadata field, other than the entire token must fit into one
storage entry:
| | Limit |
| ------------------------------------- | -------- |
| Number of key-value pairs in metadata | no limit |
| Metadata key size | no limit |
| Metadata value size | no limit |
| Total size of token metadata | 512 KiB |
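Token metadata is attached at creation time; for example (the policy name and
metadata values below are hypothetical):

```shell-session
$ vault token create -policy=my-policy \
    -metadata=owner=app-team \
    -metadata=tier=batch
```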
### Policy limits
The maximum size of a policy is limited by the storage
entry size. Policy lists that appear in tokens or entities must fit
within a single storage entry.
| | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| ---------------------------------------------- | ------------------------ | ---------------------------------- |
| Maximum policy size | 512 KiB | 1 MiB |
| Maximum number of policies per namespace | no limit | no limit |
| Maximum number of policies per token | ~14,000 | ~28,000 |
| Maximum number of policies per entity or group | ~14,000 | ~28,000 |
Each time a token is used, Vault must assemble the collection of
policies attached to that token, to the entity, to any groups that the
entity belongs to, and recursively to any groups that contain those groups.
Very large numbers of policies are possible, but can cause Vault’s
response time to increase. You can monitor the
[`vault.core.fetch_acl_and_token`](/vault/docs/internals/telemetry#core-metrics)
metric to determine if the time required to assemble an access control list
is becoming excessive.
### Versioned key-value store (kv-v2 secret engine)
| | Limit |
| -------------------------------------------------------- | ---------------------------------------------------------- |
| Number of secrets | no limit, up to available storage capacity |
| Maximum size of one version of a secret | slightly less than one storage entry (512 KiB or 1024 KiB) |
| Number of versions of a secret | default 10; configurable per-secret or per-mount |
| Maximum number of versions (not checked when configured) | at least 24,000 |
Each version of a secret must fit in a single storage entry; the
key-value pairs are converted to JSON before storage.
Version metadata consumes 21 bytes per version and must fit in a
single storage entry, separate from the stored data.
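Because each version consumes storage, you can cap version retention per mount or per secret. A minimal sketch, assuming a kv-v2 engine mounted at `secret/` and an illustrative secret path `myapp/creds`:
```shell-session
# Cap every secret under the mount at 5 versions.
$ vault write secret/config max_versions=5

# Override the cap for a single secret path.
$ vault kv metadata put -mount=secret -max-versions=12 myapp/creds
```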
Each secret also has version-agnostic metadata. This data can contain a `custom_metadata` field of
user-provided key-value pairs. Vault imposes the following custom metadata limits:
| | Limit |
| ----------------------------------------- | --------- |
| Number of custom metadata key-value pairs | 64 |
| Custom metadata key size | 128 bytes |
| Custom metadata value size | 512 bytes |
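For example, custom metadata within these limits can be attached to a secret's metadata (the mount, path, and key-value pairs below are illustrative):
```shell-session
$ vault kv metadata put -mount=secret \
    -custom-metadata=owner=platform-team \
    -custom-metadata=rotation=90d \
    myapp/creds
```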
### Transit secret engine
The maximum size of a Transit ciphertext or plaintext is limited by Vault's
maximum request size, as described [below](#request-size).
All archived versions of a single key must fit in a single storage entry.
This limit depends on the key size.
| Key length | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| -------------------- | ------------------------ | ---------------------------------- |
| aes128-gcm96 keys | 2008 | 4017 |
| aes256-gcm96 keys | 1865 | 3731 |
| chacha-poly1305 keys | 1865 | 3731 |
| ed25519 keys | 1420 | 2841 |
| ecdsa-p256 keys | 817 | 1635 |
| ecdsa-p384 keys | 659 | 1318 |
| ecdsa-p521 keys      | 539                      | 1078                                |
| 1024-bit RSA keys | 169 | 333 |
| 2048-bit RSA keys | 116 | 233 |
| 4096-bit RSA keys | 89 | 178 |
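If a frequently rotated key approaches these limits, older archived versions can be trimmed from storage. A hedged sketch, assuming a key named `my-key` whose versions below 5 are no longer needed; the minimum decryption version must be raised before trimming:
```shell-session
# Raise the minimum decryption version first.
$ vault write transit/keys/my-key/config min_decryption_version=5

# Remove archived key versions below 5 from storage.
$ vault write transit/keys/my-key/trim min_available_version=5
```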
## Other limits
### Request size
The maximum size of an HTTP request sent to Vault is limited by
the `max_request_size` option in the [listener stanza](/vault/docs/configuration/listener/tcp). It defaults to 32 MiB. This value, minus the overhead of
the HTTP request itself, places an upper bound on any Transit operation,
and on the maximum size of any key-value secrets.
### Request duration
The maximum duration of a Vault operation is
[`max_request_duration`](/vault/docs/configuration/listener/tcp), which defaults to
90 seconds. If a particular secret engine takes longer than this to perform an
operation on a remote service, the Vault client will see a failure.
The environment variable [`VAULT_CLIENT_TIMEOUT`](/vault/docs/commands#vault_client_timeout) sets a client-side maximum duration as well,
which is 60 seconds by default.
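For example, to give a slow remote operation more time on the client side (assuming the variable accepts a duration string like its `60s` default):
```shell-session
$ export VAULT_CLIENT_TIMEOUT=120s
```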
### Cluster and replication limits
There are no implementation limits on the maximum size of a cluster,
or the maximum number of replicas associated with a primary. However,
each replica or performance standby adds considerable overhead to the
active node, as each write must be duplicated to all standbys. The overhead of
resyncing multiple replicas at once is also high.
Monitor the active Vault node's CPU and network utilization, as well as
the lag between the last WAL and replica WAL, to determine if the
maximum number of replicas has been exceeded.
| | Limit |
| -------------------------------------- | -------------------------------------- |
| Maximum cluster size | no limit, up to active node capability |
| Maximum number of DR replicas | no limit, up to active node capability |
| Maximum number of performance replicas | no limit, up to active node capability |
### Lease limits
A systemwide [maximum TTL](/vault/docs/configuration#max_lease_ttl), and a
[maximum TTL per mount point](/vault/api-docs/system/mounts#max_lease_ttl-1) can be
configured.
Although no technical maximum exists, high lease counts can cause
degradation in system performance. We recommend short default
time-to-live values on tokens and leases to avoid a large backlog of
unexpired leases, or a large number of simultaneous expirations.
| | Limit |
| ---------------------------------- | ------------------------- |
| Maximum number of leases | advisory limit at 256,000 |
| Maximum duration of lease or token | 768 hours by default |
The current number of unexpired leases can be monitored via the
[`vault.expire.num_leases`](/vault/docs/internals/telemetry#token-identity-and-lease-metrics) metric.
### Transform limits
The Transform secret engine obeys the [FF3-1 minimum and maximum sizes
on the length of an input](/vault/docs/secrets/transform#input-limits), which
are a function of the alphabet size.
### External plugin limits
The [plugin system](/vault/docs/plugins) launches a separate process
initiated by Vault that communicates over RPC. For each secret engine and auth
method that's enabled as an external plugin, Vault will spawn a process on the
host system. For the Database Secrets Engines, external database plugins will
spawn a process for every configured connection.
Regardless of plugin type, each of these processes will incur resource overhead
on the system, including but not limited to resources such as CPU, memory,
networking, and file descriptors. There's no specific limit on the number of
secrets engines, auth methods, or configured database connections that can be
enabled. This ultimately depends on the particular plugin resource utilization,
the extent to which that plugin is being called, and the available resources on
the system. For plugins of the same type, each additional process will incur a
roughly linear increase in resource utilization. This assumes the usage of each
plugin of the same type is similar.
---
layout: docs
page_title: Recommended patterns
description: Follow these recommended patterns to effectively operate Vault.
---
# Recommended patterns
Help keep your Vault environments operating effectively by implementing the following best practices so you avoid common anti-patterns.
| Description | Applicable Vault edition |
|--- |--- |
| [Adjust the default lease time](#adjust-the-default-lease-time) | All |
| [Use identity entities for accurate client count](#use-identity-entities-for-accurate-client-count) | Enterprise, HCP |
| [Increase IOPS](#increase-iops) | Enterprise, Community |
| [Enable disaster recovery](#enable-disaster-recovery) | Enterprise |
| [Test disaster recovery](#test-disaster-recovery) | Enterprise |
| [Improve upgrade cadence](#improve-upgrade-cadence) | Enterprise, Community |
| [Test before upgrades](#test-before-upgrades) | Enterprise, Community |
| [Rotate audit device logs](#rotate-audit-device-logs) | Enterprise, Community |
| [Monitor metrics](#monitor-metrics) | Enterprise, Community |
| [Establish usage baseline](#establish-usage-baseline) | Enterprise, Community |
| [Minimize root token use](#minimize-root-token-use) | All |
| [Rekey when necessary](#rekey-when-necessary) | All |
## Adjust the default lease time
The default lease time in Vault is 32 days or 768 hours. This time allows for some operations, such as re-authentication or renewal.
See [lease](/vault/docs/concepts/lease) documentation for more information.
**Recommended pattern:**
You should tune the lease TTL value for your needs. Vault holds leases in memory until the lease expires.
We recommend keeping TTLs as short as the use case will allow.
- [Auth tune](/vault/docs/commands/auth/tune)
- [Secrets tune](/vault/docs/commands/secrets/tune)
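As a minimal sketch, assuming a kv mount at `secret/` and the userpass auth method mounted at `userpass/`, you can tune the default and maximum TTLs like so:
```shell-session
$ vault secrets tune -default-lease-ttl=1h -max-lease-ttl=24h secret/
$ vault auth tune -default-lease-ttl=1h -max-lease-ttl=24h userpass/
```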
<Note>
Tuning or adjusting TTLs does not retroactively affect tokens that were issued. New tokens must be issued after tuning TTLs.
</Note>
**Anti-pattern issue:**
If you create leases without changing the default time-to-live (TTL), leases will live in Vault until the default lease time is up.
Depending on your infrastructure and available system memory, using the default or long TTL may cause performance issues as Vault stores
leases in memory.
## Use identity entities for accurate client count
Each Vault client may have multiple accounts with the auth methods enabled on the Vault server.

**Recommended pattern:**
Since each token adds to the client count, and each unique authentication issues a token, you should use identity entities to create aliases that connect each login to a single identity.
- [Client count](/vault/docs/concepts/client-count)
- [Vault identity concepts](/vault/docs/concepts/identity)
- [Vault Identity secrets engine](/vault/docs/secrets/identity)
- [Identity: Entities and groups tutorial](/vault/tutorials/auth-methods/identity)
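A hedged sketch of tying a login to a single entity; the entity name, policy, and the `<entity_id>` and `<mount_accessor>` placeholders are illustrative (look up the accessor with `vault auth list`):
```shell-session
# Create one entity to represent the human user.
$ vault write identity/entity name="alice" policies="base"

# Alias the userpass login to that entity so both logins count as one client.
$ vault write identity/entity-alias name="alice" \
    canonical_id=<entity_id> \
    mount_accessor=<mount_accessor>
```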
**Anti-pattern issue:**
When you do not use identity entities, each login through an auth method that is not linked to the user's entity is counted as a separate client.
## Increase IOPS
IOPS (input/output operations per second) measures performance for Vault cluster members. Vault is bound by the IO limits of the storage backend rather than the compute requirements.
**Recommended pattern:**
Use the HashiCorp reference guidelines for Vault servers' hardware sizing and network considerations.
- [Vault with Integrated storage reference architecture](/vault/tutorials/day-one-raft/raft-reference-architecture#system-requirements)
- [Performance tuning](/vault/tutorials/operations/performance-tuning)
- [Transform secrets engine](/vault/docs/concepts/transform)
<Note>
Depending on the client count, the Transform (Enterprise) and Transit secret engines can be resource-intensive.
</Note>
**Anti-pattern issue:**
Limited IOPS can significantly degrade Vault’s performance.
## Enable disaster recovery
HashiCorp Vault's highly available (HA) [Integrated storage (Raft)](/vault/docs/concepts/integrated-storage)
backend provides intra-cluster data replication across cluster members. Integrated Storage provides Vault with
horizontal scalability and failure tolerance, but it does not provide backup for the entire cluster. Not utilizing
disaster recovery for your production environment will negatively impact your organization's Recovery Point
Objective (RPO) and Recovery Time Objective (RTO).
**Recommended pattern:**
For cluster-wide issues (e.g., network connectivity), Vault Enterprise Disaster Recovery (DR) replication
provides a warm standby cluster containing all primary cluster data. The DR cluster does not service reads
or writes but you can promote it to replace the primary cluster when needed.
- [Disaster recovery replication setup](/vault/tutorials/day-one-raft/disaster-recovery)
- [Disaster recovery (DR) replication](/vault/docs/enterprise/replication#disaster-recovery-dr-replication)
- [DR replication API documentation](/vault/api-docs/system/replication/replication-dr)
We also recommend periodically creating data snapshots to protect against data corruption.
- [Vault data backup standard procedure](/vault/tutorials/standard-procedures/sop-backup)
- [Automated integrated storage snapshots](/vault/docs/enterprise/automated-integrated-storage-snapshots)
- [/sys/storage/raft/snapshot-auto](/vault/api-docs/system/storage/raftautosnapshots)
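For clusters on Integrated Storage, you can also take a manual snapshot at any time; a minimal example (the file name is illustrative):
```shell-session
$ vault operator raft snapshot save backup-$(date +%Y%m%d).snap
```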
**Anti-pattern issue:**
If you do not enable disaster recovery and catastrophic failure occurs, your use cases will encounter longer downtime duration and costs associated with not serving Vault clients in your environment.
## Test disaster recovery
Your disaster recovery (DR) solution is a key part of your overall disaster recovery plan.
Designing and configuring your Vault disaster recovery solution is only the first step. You also need to validate the DR solution, as not doing so can negatively impact your organization's Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
**Recommended pattern:**
Vault's Disaster Recovery (DR) replication mode provides a warm standby for
failover if the primary cluster experiences catastrophic failure. You should
periodically test the disaster recovery replication cluster by completing the
failover and failback procedure.
- [Vault disaster recovery replication failover and failback tutorial](/vault/tutorials/enterprise/disaster-recovery-replication-failover)
- [Vault Enterprise replication](/vault/docs/enterprise/replication)
- [Monitoring Vault replication](/vault/tutorials/monitoring/monitor-replication)
You should establish standard operating procedures for restoring a Vault cluster from a snapshot. The restoration methods following a DR situation would be in response to data corruption or sabotage, which Disaster Recovery Replication might be unable to protect against.
- [Standard procedure for restoring a Vault cluster](/vault/tutorials/standard-procedures/sop-restore)
**Anti-pattern issue:**
If you don't test your disaster recovery solution, your key stakeholders will not feel confident they can effectively perform the disaster recovery plan. Testing the DR solution also helps your team to remove uncertainty around recovering the system during an outage.
## Improve upgrade cadence
While it might be easy to upgrade Vault whenever you have capacity, not having a frequent upgrade cadence can impact your Vault performance and security.
**Recommended pattern:**
We recommend upgrading to the latest version of Vault. Subscribe to releases in [Vault's GitHub repository](https://github.com/hashicorp/vault) and to notifications from [HashiCorp Vault discuss](https://discuss.hashicorp.com/c/release-notifications/57) to learn when we release a new Vault version.
- [Vault upgrade guides](/vault/docs/upgrading)
- [Vault feature deprecation notice and plans](/vault/docs/deprecation)
**Anti-pattern issue:**
When you do not keep a regular upgrade cadence, your Vault environment could be missing key features or improvements.
- Missing patches for bugs or vulnerabilities as documented in the [CHANGELOG](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md).
- New features to improve workflow.
- Must use version-specific rather than the latest documentation.
- Some educational resources require a specific minimum Vault version.
- Updates may require a stepped approach that uses an intermediate version before installing the latest binary.
## Test before upgrades
We recommend testing Vault in a sandbox environment before deploying to production.
Although it might be faster to upgrade immediately in production, testing will help identify any compatibility issues.
Be aware of the [CHANGELOG](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md) and account for any new features, improvements, known issues and bug fixes in your testing.
**Recommended pattern:**
Test new Vault versions in sandbox environments before upgrading in production and follow our upgrading documentation.
We recommend adding a testing phase to your standard upgrade procedure.
- [Vault upgrade standard procedure](/vault/tutorials/standard-procedures/sop-upgrade)
- [Upgrading Vault](/vault/docs/upgrading)
**Anti-pattern issue:**
Without adequate testing before upgrading in production, you risk compatibility and performance issues.
<Warning>
This could lead to downtime or degradation in your production Vault environment.
</Warning>
## Rotate audit device logs
Audit devices in Vault maintain a detailed log of every client request and server response.
If you allow audit device logs to grow perpetually without rotation, you may face a blocked audit device if the filesystem storage becomes exhausted.
**Recommended pattern:**
Inspect and rotate audit logs periodically.
- [Blocked audit devices tutorial](/vault/tutorials/monitoring/blocked-audit-devices)
- [blocked audit devices](/vault/docs/audit#blocked-audit-devices)
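For example, enabling a second audit device alongside a file device reduces the chance that a single blocked device halts Vault (the file path and syslog options shown are illustrative):
```shell-session
$ vault audit enable file file_path=/var/log/vault/audit.log
$ vault audit enable syslog tag="vault" facility="AUTH"
```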
**Anti-pattern issue:**
Vault will not respond to requests if no enabled audit device can record them.
An audit device can exhaust the local storage if its log is not maintained and rotated over time.
## Monitor metrics
Relying solely on Vault operational logs and data in the Vault UI gives you only a partial picture of the cluster's performance.
**Recommended pattern:**
Continuous monitoring will allow organizations to detect minor problems and promptly resolve them.
Migrating from reactive to proactive monitoring will help to prevent system failures. Vault has multiple outputs
that help monitor the cluster's activity: audit logs, operational logs, and telemetry data. This data can work
with a SIEM (security information and event management) tool for aggregation, inspection, and alerting capabilities.
- [Telemetry](/vault/docs/internals/telemetry#secrets-engines-metric)
- [Telemetry metrics reference](/vault/tutorials/monitoring/telemetry-metrics-reference)
Adding a monitoring solution:
- [Audit device logs and incident response with elasticsearch](/vault/tutorials/monitoring/audit-elastic-incident-response)
- [Monitor telemetry & audit device log data](/vault/tutorials/monitoring/monitor-telemetry-audit-splunk)
- [Monitor telemetry with Prometheus & Grafana](/vault/tutorials/monitoring/monitor-telemetry-grafana-prometheus)
<Note>
Vault logs to standard output and standard error by default, which the systemd journal captures automatically. You can also instruct Vault to redirect operational log writes to a file.
</Note>
**Anti-pattern issue:**
Having partial insight into cluster activity can leave the business in a reactive state.
## Establish usage baseline
A baseline provides insight into current utilization and thresholds. Telemetry metrics are valuable, especially when monitored over time. You can use telemetry metrics to gather a baseline of cluster activity, while alerts inform you of abnormal activity.
**Recommended pattern:**
Telemetry information can also be streamed directly from Vault to a range of metrics aggregation solutions and
saved for aggregation and inspection.
- [Vault usage metrics](/vault/tutorials/monitoring/usage-metrics)
- [Diagnose server issues](/vault/tutorials/monitoring/diagnose-startup-issues)
**Anti-pattern issue:**
This issue closely relates to the [monitor metrics](#monitor-metrics) recommended pattern. Telemetry data is only held in memory for a short period.
## Minimize root token use
Initializing a Vault server emits an initial root token that gives root-level access across all Vault features.
**Recommended pattern:**
We recommend that you revoke the root token after initializing Vault within your environment. If users require elevated access, create access control list policies that grant proper capabilities on the necessary paths in Vault. If your operations require the root token, keep it for the shortest possible time before revoking it.
- [Generate root tokens tutorial](/vault/tutorials/operations/generate-root)
- [Root tokens](/vault/docs/concepts/tokens#root-tokens)
- [Vault policies](/vault/docs/concepts/policies)
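Once initial setup is complete, you can revoke the root token; in this minimal example, `<initial_root_token>` is a placeholder for the token emitted at initialization:
```shell-session
$ vault token revoke <initial_root_token>
```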
**Anti-pattern issue:**
A root token can perform all actions within Vault and never expire. Unrestricted access can give users higher privileges than necessary to all Vault operations and paths. Sharing and providing access to root tokens poses a security risk.
## Rekey when necessary
Vault distributes unseal keys to stakeholders. A quorum of key shares, as determined by your initialization settings, is needed to unseal Vault.
**Recommended pattern:**
Vault supports rekeying, and you should establish a workflow for rekeying when necessary.
- [Rekeying & rotating Vault](/vault/tutorials/operations/rekeying-and-rotating)
- [Operator rekey](/vault/docs/commands/operator/rekey)
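A hedged sketch of initiating a rekey that issues five new key shares with a threshold of three; each current key holder then submits their share to complete the operation:
```shell-session
$ vault operator rekey -init -key-shares=5 -key-threshold=3
```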
**Anti-pattern issue:**
If several stakeholders leave the organization, you risk not having the required key shares to meet the unseal quorum, which could result in the loss of the ability to unseal Vault.
---
layout: docs
page_title: Key metrics for common health checks
description: >-
Learn about the key Vault metrics you should monitor with health checks.
---
# Key metrics for common health checks
This document covers common Vault monitoring patterns. It is important to have operational and usage insight into a running Vault cluster to understand performance, assist with proactive incident response, and understand business workloads and use cases.
This document consists of six metrics sections: core, usage, storage backend, audit, resource, and replication. Core metrics are fundamental internal metrics which you should monitor to ensure the health of your Vault cluster. The usage metrics section covers metrics which help count active and historical clients in Vault. The storage backend section highlights the metrics to monitor so that you understand the storage infrastructure that your Vault cluster uses, allowing you to ensure your storage is functioning as intended. Audit metrics allow you to set up monitoring that helps you meet your compliance requirements. Resource metrics allow you to monitor host resources that Vault uses, such as CPU, memory, and networking. Replication metrics help you ensure that Vault is replicating data as intended.
## Core metrics
### Servers assume the leader role
#### Metrics:
`vault.core.leadership_lost`
`vault.core.leadership_setup_failed`
#### Background:

The diagram illustrates a highly available Vault cluster with five nodes distributed between three availability zones. Vault's Integrated Storage uses a consensus protocol to provide consistency across the cluster nodes. The leader (active) node is responsible for ingesting new log entries, replicating them to the follower (standby) nodes, and managing when to commit an entry. Because the consensus protocol depends on a leader, the voting nodes will elect a new leader if the current one is lost. Refer to the [Integrated storage](/vault/docs/internals/integrated-storage) documentation for more details.
<Tip>
When you operate Vault with Integrated Storage, it automatically provides [additional metrics for Raft leadership changes](/vault/docs/internals/integrated-storage#consensus-protocol).
</Tip>
#### Alerting:
The metric `vault.core.leadership_lost` measures the duration a server held the leader position before losing it. A consistently low value for this metric suggests a high leadership turnover, indicating potential instability within the cluster.
On the other hand, spikes in the `vault.core.leadership_setup_failed` metric indicate failures that standby servers cannot successfully assume the leader role when required. Investigate these failures promptly, and check for any issues related to acquiring the leader election lock. These metrics are important alerts and can signify security and reliability risks. For example, there might be a communication problem between Vault and its storage backend or a broader outage causing multiple Vault servers to fail. Monitoring and analyzing these metrics can help identify and address any underlying issues, ensuring the stability and security of your Vault cluster.
### Higher latency in your application
#### Metrics:
`vault.core.handle_login_request`
`vault.core.handle_request`
#### Background:
Vault can use trusted sources like Kubernetes, Active Directory, and Okta to verify the identity of clients (users or services) before granting them access to secrets. Clients must authenticate themselves by making a login request through the `vault login` command or the API. When the authentication is successful, Vault provides the client with a token, which is stored locally on the client's machine and is used to authorize future requests. As long as the client presents a valid token that has not expired, Vault recognizes the client as authenticated.
#### Alerting:
The metric `vault.core.handle_login_request`, when averaged, measures how fast Vault responds to client login requests. If you notice a significant increase in this metric but no significant increase in the number of tokens issued (`vault.token.creation`), it's crucial to investigate the cause of this issue immediately.
When a client sends a request to Vault, it typically needs to go through an authentication process to verify its identity and obtain the necessary permissions. This authentication process involves validating the client's credentials, such as username and password or API token, and ensuring the client has the appropriate access rights.
If the authentication process in Vault is slow, it takes longer for Vault to verify the client's credentials and authorize the request. This delay in authentication directly impacts the response time of Vault to the client's request.
You should also monitor the `vault.core.handle_request` metric, which measures server workload. This metric helps determine whether you need to scale up your cluster to accommodate increased traffic. On the other hand, a sudden drop in throughput may indicate connectivity problems between Vault and its clients, which you should investigate further.
### Difficulties with setting up auditing or problems with mounting a custom plugin backend
#### Metrics:
`vault.core.post_unseal`
#### Background:
Vault servers can be in one of two states: sealed or unsealed. To ensure security, Vault does not trust the storage backends and stores data in an encrypted form. After Vault is started, it must undergo an unsealing process to obtain the plaintext root key necessary to read the decryption key to decrypt the data. After unsealing, Vault performs various post-unseal operations to set up the server correctly before it can start responding to requests.
#### Alerting:
If you notice sudden increases in the `vault.core.post_unseal` metric, issues might affect your server's availability during the post-unseal process, such as errors with auditing or mounting a custom plugin backend. To diagnose the issues, refer to your Vault server's logs.
## Usage metrics
### Excessive token creations affecting Vault performance
#### Metrics:
`vault.token.creation`
#### Background:
All authenticated Vault requests require a valid client token. Tokens are linked to policies determining which capabilities a client (user or system) has for a given path. Vault issues three types of tokens: service tokens, batch tokens, and recovery tokens.
Service tokens are what users will generally think of as "normal" Vault tokens. They support all features, such as renewal, revocation, creating child tokens, and more. They are correspondingly heavyweight to create and track.
Batch tokens are encrypted binary large objects (blobs) with just enough information about the client. While Vault does not persist the batch tokens, it persists the service tokens. The amount of space required to store the service token depends on the authentication method. Therefore, a large number of service tokens could contribute to an out-of-memory issue.
Recovery tokens are used exclusively for operating Vault in [recovery mode](/vault/docs/concepts/recovery-mode).
#### Alerting:
By monitoring the number of tokens created (`vault.token.creation`) and the frequency of login requests (`vault.core.handle_login_request` counted as a total), you can gain insights into the overall workload of your system. If your scenario involves running numerous short-lived processes, such as serverless workloads, you may experience simultaneous creation and request of secrets from hundreds or thousands of functions. In such cases, you will observe correlated spikes in both metrics.
When dealing with transient workloads, you should utilize batch tokens to enhance the performance of your cluster. Vault creates a batch token, which encrypts all the client's information and provides it to the client. When the client employs this token, Vault decrypts the stored metadata and fulfills the request. Unlike service tokens, batch tokens do not retain client information or get replicated across clusters. This characteristic alleviates the burden on the storage backend and leads to improved cluster performance.
To learn more about batch tokens, refer to the [batch tokens](/vault/tutorials/tokens/batch-tokens) tutorial.
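As a quick illustration, you can issue a short-lived batch token directly, or tune an auth mount to issue batch tokens by default (the policy name, TTL, and approle mount are illustrative):
```shell-session
$ vault token create -type=batch -policy="app" -ttl=5m
$ vault auth tune -token-type=batch approle/
```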
### Lease lifecycle introducing unexpected traffic spikes in Vault
#### Metrics:
`vault.expire.num_leases`
#### Background:
Vault creates a lease when it generates a dynamic secret or service token. This lease contains essential information like the secret or token’s time to live (TTL) value, and whether it can be extended or renewed. Vault stores the lease in the storage backend. If Vault doesn't renew the lease before reaching its TTL, it will expire and be invalidated, causing the associated secret or token to be revoked.
#### Alerting:
Monitoring the number of active leases in your Vault server (`vault.expire.num_leases`) can provide valuable insights into the server's activity level. An increase in leases suggests a higher volume of traffic to your application. At the same time, a sudden decrease could indicate issues with accessing dynamic secrets quickly enough to serve incoming traffic.
We recommend setting the shortest possible TTL for leases to improve security and performance. There are two main reasons for this. Firstly, a shorter TTL reduces the impact of potential attacks. Secondly, it prevents leases from accumulating indefinitely and consuming excessive space in the storage backend. If you don't specify a TTL explicitly, leases default to 32 days. However, if there is a sudden surge in load and numerous leases are generated with this long default TTL, the storage backend can quickly reach its maximum capacity and crash, resulting in unavailability.
Depending on your specific use case, you may only require a token or secret for a few minutes or hours, rather than the full 32 days. By setting an appropriate TTL, you can free up storage space for storing new secrets and tokens. In the case of Vault Enterprise, you can set a [lease count quota](/vault/docs/enterprise/lease-count-quotas) to limit the number of leases generated below a certain threshold. When the threshold is reached, Vault will restrict the creation of new leases until an existing lease expires or is revoked. This helps manage the overall number of leases and prevents excessive resource usage.
Read the [Protecting Vault with resource quotas](/vault/tutorials/operations/resource-quotas) tutorial to learn how to set the lease count quotas.
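On Vault Enterprise, a hedged sketch of capping the leases generated through one auth mount (the quota name, path, and limit are illustrative):
```shell-session
$ vault write sys/quotas/lease-count/approle-quota \
    max_leases=5000 \
    path="auth/approle"
```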
Alternatively, you can leverage the [Vault agent caching](/vault/docs/agent-and-proxy/agent/caching) to delegate the lease lifecycle management to Vault Agent.
<Note>
The lifecycle of the leases are managed by the expiration manager, which handles the revocation of a lease when the time to live value associated with the lease is reached. Refer to the [Troubleshoot irrevocable leases](/vault/tutorials/monitoring/troubleshoot-irrevocable-leases) tutorial when you encounter irrevocable leases monitored by the `vault.expire.num_irrevocable_leases` metric.
</Note>
### Know the average time it takes to renew or revoke client tokens
#### Metrics:
`vault.expire.renew-token`
`vault.expire.revoke`
#### Background:
Vault automatically revokes access to secrets granted by a token when its time to live (TTL) expires. You can manually revoke a token if there are signs of a possible security breach. When a token is no longer valid (either expired or revoked), the client will lose access to the secrets managed by Vault. Therefore, the client must either renew the token before it expires, or request a new one.
#### Alerting:
Monitoring the timely completion of revocation (`vault.expire.revoke`) and renewal (`vault.expire.renew-token`) operations is crucial for ensuring the validity and accessibility of secrets. Some long-running applications may require a token to be renewed instead of getting a new one. In such a case, the time it takes to renew a token can delay the application's access to secrets. It is also important to track the time it takes to complete the revoke operation, which helps detect unauthorized access to secrets, as attackers who gain access can potentially infiltrate your system and cause harm. If you notice significant delays in the revocation process, investigate your server logs for any backend issues that might have hindered revocation.
## Storage backend metrics
### Detect any performance issues with your Vault's storage backend
#### Metrics:
`vault.<STORAGE>.get`
`vault.<STORAGE>.put`
`vault.<STORAGE>.list`
`vault.<STORAGE>.delete`
#### Background:
The performance of the storage backend affects the overall performance of Vault, so it is critical to monitor your storage backend and detect and react to any anomaly. Backend monitoring lets you confirm that your storage infrastructure is functioning optimally, and tracking its performance gives you detailed insight into the backend's operations and highlights areas that require improvement or optimization.
#### Alerting
If Vault takes longer to access the backend for operations like retrieving (`vault.<STORAGE>.get`), storing (`vault.<STORAGE>.put`), listing (`vault.<STORAGE>.list`), or deleting (`vault.<STORAGE>.delete`) items, the Vault clients may be experiencing delays caused by storage limitations. To address this issue, you can set up alerts that will notify your team automatically when Vault's access to the storage backend slows down. This will allow you to take action, such as upgrading to disks with better input/output (I/O) performance, before the increased latency negatively impacts your application users' experience.
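For example, assuming you expose the Prometheus metrics format (see `prometheus_retention_time`) and use Consul as the storage backend, you can spot-check the storage timers before building a full alerting pipeline:

```shell-session
$ curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/metrics?format=prometheus" | grep 'vault_consul'
```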
If you are using Integrated Storage, the following resources provide additional guidance:
- [Inspect Data in Integrated Storage](/vault/tutorials/monitoring/inspect-data-integrated-storage)
- [Inspect Data in BoltDB](/vault/tutorials/monitoring/inspect-data-boltdb)
## Audit metrics
### Blocked audit devices
#### Metrics:
`vault.audit.log_request_failure`
`vault.audit.log_response_failure`
#### Background:
Audit devices play a crucial role in meeting compliance requirements by recording a comprehensive audit log of requests and responses from Vault. For a production deployment, your Vault cluster should have at least one audit device enabled so that you can trace all incoming requests and outgoing responses associated with your cluster. If you rely on only one audit device and encounter problems (e.g., network connection loss or permission issues), Vault can become unresponsive and cease to handle requests. Enabling at least one additional audit device is essential to ensure uninterrupted functionality and responsiveness from Vault.
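For example, you can pair a file audit device with a syslog audit device so one remains available if the other blocks (the file path is illustrative):

```shell-session
$ vault audit enable file file_path=/var/log/vault_audit.log

$ vault audit enable syslog tag="vault" facility="AUTH"
```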
#### Alerting
To ensure smooth operation, monitoring any unusual increases in audit log request failures (`vault.audit.log_request_failure`) and response failures (`vault.audit.log_response_failure`) is important. These failures could indicate a device blockage. If such issues arise, examining the audit logs can help identify the problematic device and provide additional clues about the underlying problem.
If Vault is unable to write audit logs to the syslog, the server will generate error logs similar to the following example:
```plaintext
2020-10-20T12:34:56.290Z [ERROR] audit: backend failed to log response: backend=syslog/ error="write unixgram @->/test/log: write: message too long"
2020-10-20T12:34:56.291Z [ERROR] core: failed to audit response: request_path=sys/mounts
error="1 error occurred:
* no audit backend succeeded in logging the response
```
You should expect to encounter a pair of errors from the audit and core modules for each failed log response. If you receive an error message containing "write: message too long," it suggests that the entries that Vault is trying to write to the syslog audit device exceed the size of the syslog host's socket send buffer. In such cases, it's necessary to investigate what is causing the generation of large log entries, such as an extensive list of Active Directory or LDAP groups.
Refer to the [Blocked audit devices](/vault/tutorials/monitoring/blocked-audit-devices) tutorial for additional guidance.
## Resource metrics
### Vault memory issues indicated by garbage collection
#### Metrics:
`vault.runtime.sys_bytes`
`vault.runtime.gc_pause_ns`
#### Background:
Garbage collection in the Go runtime temporarily pauses all operations. These pauses are usually brief, but garbage collection happens more often if memory usage is high. This increased frequency of garbage collection can cause delays in Vault's performance.
#### Alerting
Analyzing the relationship between Vault's memory usage (represented as a percentage of total available memory on the host) and garbage collection pause time (measured by `vault.runtime.gc_pause_ns`) can provide valuable insights into resource limitations and assist in effectively allocating compute resources.
To illustrate, when `vault.runtime.sys_bytes` exceeds 90 percent of the available memory on the host, it is advisable to add more memory to prevent performance degradation. Additionally, you should set up an alert that triggers if the GC pause time exceeds 5 seconds per minute, so that you are notified promptly and can take swift action to address the issue.
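As a quick spot check, you can pull both values from the in-memory metrics endpoint (a sketch; the grep pattern assumes the default metric names):

```shell-session
$ vault read -format=json sys/metrics | grep -E 'sys_bytes|gc_pause_ns'
```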
### CPU I/O wait time
#### Background:
Vault scales horizontally by adding more instances or nodes, but there are still practical limits to scalability. Excessive CPU wait time for I/O operations can indicate that the system is reaching its scalability limits or overusing specific resources. By tracking these metrics, administrators can assess the system's scalability and take appropriate actions, such as optimizing I/O operations or adding additional resources, to maintain performance as the system grows.
#### Alerting
We recommend keeping the I/O wait time below 10 percent to ensure optimal performance. Excessively long wait times indicate that clients are waiting for Vault to respond to their requests, which can degrade the performance of applications that rely on Vault. In such situations, evaluate whether your resources are properly sized for your workload and whether requests are evenly distributed across all CPUs. These steps help address potential performance issues and keep Vault and its dependent applications running smoothly.
### Keep your network throughput within the threshold
#### Background:
Monitoring the network throughput of your Vault clusters allows you to gauge their workload. A sudden decrease in traffic going in or out might indicate communication problems between Vault and its clients or dependencies. Conversely, if you observe an unexpected surge in network activity, it could be a sign of a denial of service (DoS) attack. Knowing these network patterns can provide valuable insights and help you identify potential issues or security threats.
#### Alerting
Starting from Vault 1.5, you can set rate limit quotas to ensure Vault's overall stability and health. When a server reaches this threshold, Vault rejects new client requests and responds with an HTTP 429 error ("Too Many Requests"). Rejected requests are recorded in your audit logs with a message like: `error: request path kv/app/test: rate limit quota exceeded`. Choose an appropriate limit for the rate quota so that it doesn't block legitimate requests and slow down your applications. To monitor the frequency of these breaches and adjust your limit accordingly, watch the `quota.rate_limit.violation` metric, which increments with each violation of the rate limit quota.
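For example, to create a global rate limit quota of 500 requests per second (the value is illustrative; size it to your workload):

```shell-session
$ vault write sys/quotas/rate-limit/global rate=500
```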
Refer to the [Protecting Vault with resource quotas](/vault/tutorials/operations/resource-quotas) tutorial to learn how to set the rate limit quotas for your Vault.
## Replication metrics
### Free memory in the storage backend by monitoring Write-Ahead logs
#### Metrics:
`vault.wal_flushready`
`vault.wal.persistWALs`
#### Background:
To maintain high performance, Vault utilizes a garbage collector that periodically removes old Write-Ahead Logs (WALs) to free up memory on the storage backend. However, when there are unexpected surges in traffic, the accumulation of WALs can occur rapidly, leading to increased strain on the storage backend. These surges can negatively affect other processes in Vault that rely on the same storage backend. Therefore, it is important to assess the impact of replication on the performance of your storage backend. By doing so, you can better understand how the replication process influences your system's overall performance.
#### Alerting
We recommend you keep track of two metrics: `vault.wal_flushready` and `vault.wal.persistWALs`. The first metric measures the time it takes to flush a ready Write-Ahead Log (WAL) to the persist queue, while the second metric measures the time it takes to persist a WAL to the storage backend.
To ensure efficient performance, we advise you to set up alerts that will notify you when the `vault.wal_flushready` metric exceeds 500 milliseconds or when the `vault.wal.persistWALs` metric surpasses 1,000 milliseconds. These alerts serve as indicators that backpressure is slowing down your storage backend.
If either of these alerts is triggered, consider scaling your storage backend to accommodate the increased workload. Scaling can help alleviate the strain and maintain optimal performance.
### Vault Enterprise Replication health check
#### Metrics:
`vault.replication.wal.last_wal`
#### Background:
Vault's Write-Ahead Log (WAL) is a durable data storage and recovery mechanism. WAL is a log file that records all changes made to the Vault data store before Vault persists them to the underlying storage backend. The WAL provides an extra layer of reliability and ensures data integrity in case of system failures or crashes.
#### Alerting
When you run Vault Enterprise deployments with Performance Replication and/or Disaster Recovery Replication configured, you should monitor that data is replicated from the primary to the secondary clusters. To detect whether your primary and secondary clusters are losing synchronization, compare the last Write-Ahead Log (WAL) index on both clusters. Detecting discrepancies matters because if the secondary clusters fall significantly behind the primary and the primary cluster becomes unavailable, requests served by the secondary clusters will yield outdated data. If you notice missing values in the WAL, investigate potential causes, which may include:
- **Network issues between the primary and secondary clusters:** Problems with the network connection can hinder the proper replication of data between the clusters.
- **Resource limitations on the primary or secondary systems:** If the primary or secondary clusters are experiencing resource constraints, it can affect their ability to replicate data effectively.
- **Issues with specific keys:** Sometimes, the problem may relate to specific keys within Vault. To identify such issues, examine Vault's operational and storage logs, which provide detailed information about the problematic keys causing the synchronization gaps.
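To compare WAL positions across clusters, you can query the replication status endpoint on both the primary and each secondary (Vault Enterprise; the output fields vary with the replication mode):

```shell-session
$ vault read -format=json sys/replication/status
```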
Refer to the [Monitoring Vault replication](/vault/tutorials/monitoring/monitor-replication) tutorial to learn more.
## Additional references
- [Monitor telemetry with Prometheus & Grafana](/vault/tutorials/monitoring/monitor-telemetry-grafana-prometheus)
- [Monitor telemetry & Audit Device Log Data](/vault/tutorials/monitoring/monitor-telemetry-audit-splunk)
- [Vault usage metrics](/vault/tutorials/monitoring/usage-metrics)
---
layout: docs
page_title: Enable Vault telemetry
description: >-
Step-by-step guide to enabling telemetry gathering with Vault
---
# Enable Vault telemetry gathering
Collect telemetry data from your Vault installation.
## Before you start
- **You must have Vault 1.14 or later installed and running**.
- **You must have access to your [Vault configuration](/vault/docs/configuration) file**.
## Step 1: Choose an aggregation agent
@include 'telemetry/supported-aggregation-agents.mdx'
## Step 2: Enable at least one audit device
To include audit-related metrics, you must enable auditing on at least one device
with the `vault audit enable` command. For example, to enable auditing for the
`file` device and save the logs to `/var/log/vault_audit.log`:
```shell-session
$ vault audit enable file file_path=/var/log/vault_audit.log
```
By default, Enterprise installations replicate audit devices to the secondary
performance nodes in a cluster. To limit performance replication for an audit
device, use the `local` flag to mark the device as local to the current node:
```shell-session
$ vault audit enable file -local file_path=/var/log/vault_audit.log
```
## Step 3: Configure telemetry collection
To configure telemetry collection, update the telemetry stanza in your Vault
configuration with your collection preferences and aggregation agent details.
For example, the following `telemetry` stanza configures Vault with the standard
telemetry defaults and connects it to a Statsite agent running on the default
port within a company intranet at `mycompany.statsite`:
```hcl
telemetry {
usage_gauge_period = "10m"
maximum_gauge_cardinality = 500
disable_hostname = false
enable_hostname_label = false
lease_metrics_epsilon = "1h"
num_lease_metrics_buckets = 168
add_lease_metrics_namespace_labels = false
filter_default = true
statsite_address = "mycompany.statsite:8125"
}
```
Many metrics solutions charge by the metric. You can set `filter_default` to
false and use the `prefix_filter` parameter to include and exclude specific
values based on metric name to avoid paying for irrelevant information.
For example, to limit your telemetry to the core token metrics plus the number
of leases set to expire:
```hcl
telemetry {
filter_default = false
prefix_filter = ["+vault.token", "-vault.expire", "+vault.expire.num_leases"]
}
```
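If your reporting solution scrapes metrics rather than receiving pushed statsite data, a minimal stanza for the Prometheus exposition format might look like this (the retention value is illustrative):

```hcl
telemetry {
  # Enable the /v1/sys/metrics?format=prometheus endpoint.
  prometheus_retention_time = "24h"

  # Prometheus attaches host labels itself, so hostname-prefixed
  # metric names are unnecessary.
  disable_hostname = true
}
```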
## Step 4: Choose a reporting solution
You need to save or forward your telemetry data to a separate storage solution
for reporting, analysis, and alerting. Which solution you need depends on the
feature set provided by your aggregation agent and the protocol support of your
reporting platform.
Popular reporting solutions compatible with Vault:
- [Grafana](https://grafana.com/grafana)
- [Graphite](https://www.hostedgraphite.com)
- [InfluxData: Telegraf](https://www.influxdata.com/time-series-platform/telegraf)
- [InfluxData: InfluxDB](https://www.influxdata.com/products/influxdb-overview)
- [InfluxData: Chronograf](https://www.influxdata.com/time-series-platform/chronograf)
- [InfluxData: Kapacitor](https://www.influxdata.com/time-series-platform/kapacitor)
- [Splunk](https://www.splunk.com)
## Next steps
- Review the
[Key metrics for common health checks](/well-architected-framework/reliability/reliability-vault-monitoring-key-metrics)
guide to identify metrics you may want to start monitoring immediately.
- Review the full list of available
[telemetry parameters](/vault/docs/configuration/telemetry#telemetry-parameters).
- Review the [Monitor telemetry and audit device log data](/vault/tutorials/monitoring/monitor-telemetry-audit-splunk)
tutorial for general monitoring guidance and steps to configure your
Vault telemetry for Splunk using Telegraf and Fluentd.
- Review the
[Monitor telemetry with Prometheus and Grafana](/vault/tutorials/monitoring/monitor-telemetry-grafana-prometheus)
  tutorial to configure your Vault telemetry for Prometheus and Grafana.
---
layout: docs
page_title: "Telemetry reference: Database metrics"
description: >-
Technical reference for database telemetry values.
---
# Database telemetry
Database telemetry provides general information about configured secrets engines
and databases.
## Secrets database metrics
@include 'telemetry-metrics/secretsdb-intro.mdx'
@include 'telemetry-metrics/database/close.mdx'
@include 'telemetry-metrics/database/close/error.mdx'
@include 'telemetry-metrics/database/createuser.mdx'
@include 'telemetry-metrics/database/createuser/error.mdx'
@include 'telemetry-metrics/database/initialize.mdx'
@include 'telemetry-metrics/database/initialize/error.mdx'
@include 'telemetry-metrics/database/name/close.mdx'
@include 'telemetry-metrics/database/name/close/error.mdx'
@include 'telemetry-metrics/database/name/createuser.mdx'
@include 'telemetry-metrics/database/name/createuser/error.mdx'
@include 'telemetry-metrics/database/name/initialize.mdx'
@include 'telemetry-metrics/database/name/initialize/error.mdx'
@include 'telemetry-metrics/database/name/renewuser.mdx'
@include 'telemetry-metrics/database/name/renewuser/error.mdx'
@include 'telemetry-metrics/database/name/revokeuser.mdx'
@include 'telemetry-metrics/database/name/revokeuser/error.mdx'
@include 'telemetry-metrics/database/renewuser.mdx'
@include 'telemetry-metrics/database/renewuser/error.mdx'
@include 'telemetry-metrics/database/revokeuser.mdx'
@include 'telemetry-metrics/database/revokeuser/error.mdx'
## Cockroach database
Metrics related to your Cockroach database **storage backend**.
@include 'telemetry-metrics/vault/cockroachdb/delete.mdx'
@include 'telemetry-metrics/vault/cockroachdb/get.mdx'
@include 'telemetry-metrics/vault/cockroachdb/list.mdx'
@include 'telemetry-metrics/vault/cockroachdb/put.mdx'
## Couch database
Metrics related to your Couch database **storage backend**.
@include 'telemetry-metrics/vault/couchdb/delete.mdx'
@include 'telemetry-metrics/vault/couchdb/get.mdx'
@include 'telemetry-metrics/vault/couchdb/list.mdx'
@include 'telemetry-metrics/vault/couchdb/put.mdx'
## Dynamo database
Metrics related to your Dynamo database **storage backend**.
@include 'telemetry-metrics/vault/dynamodb/delete.mdx'
@include 'telemetry-metrics/vault/dynamodb/get.mdx'
@include 'telemetry-metrics/vault/dynamodb/list.mdx'
@include 'telemetry-metrics/vault/dynamodb/put.mdx'
## Google Cloud - Spanner
Metrics related to your Spanner **storage backend**.
@include 'telemetry-metrics/vault/spanner/delete.mdx'
@include 'telemetry-metrics/vault/spanner/get.mdx'
@include 'telemetry-metrics/vault/spanner/list.mdx'
@include 'telemetry-metrics/vault/spanner/lock/lock.mdx'
@include 'telemetry-metrics/vault/spanner/lock/unlock.mdx'
@include 'telemetry-metrics/vault/spanner/lock/value.mdx'
@include 'telemetry-metrics/vault/spanner/put.mdx'
## Microsoft SQL Server (MSSQL)
Metrics related to your SQL Server **storage backend**.
@include 'telemetry-metrics/vault/mssql/delete.mdx'
@include 'telemetry-metrics/vault/mssql/get.mdx'
@include 'telemetry-metrics/vault/mssql/list.mdx'
@include 'telemetry-metrics/vault/mssql/put.mdx'
## MySQL
Metrics related to your MySQL **storage backend**.
@include 'telemetry-metrics/vault/mysql/delete.mdx'
@include 'telemetry-metrics/vault/mysql/get.mdx'
@include 'telemetry-metrics/vault/mysql/list.mdx'
@include 'telemetry-metrics/vault/mysql/put.mdx'
## PostgreSQL
Metrics related to your PostgreSQL **storage backend**.
@include 'telemetry-metrics/vault/postgres/delete.mdx'
@include 'telemetry-metrics/vault/postgres/get.mdx'
@include 'telemetry-metrics/vault/postgres/list.mdx'
@include 'telemetry-metrics/vault/postgres/put.mdx'
---
layout: docs
page_title: "Telemetry reference: All metrics"
description: >-
Full list of all telemetry values provided by Vault.
---
# All Vault telemetry metrics
For completeness, we provide a full list of available metrics below in
alphabetical order by name.
## Full metric list
@include 'telemetry-metrics/database/close.mdx'
@include 'telemetry-metrics/database/close/error.mdx'
@include 'telemetry-metrics/database/createuser.mdx'
@include 'telemetry-metrics/database/createuser/error.mdx'
@include 'telemetry-metrics/database/initialize.mdx'
@include 'telemetry-metrics/database/initialize/error.mdx'
@include 'telemetry-metrics/database/name/close.mdx'
@include 'telemetry-metrics/database/name/close/error.mdx'
@include 'telemetry-metrics/database/name/createuser.mdx'
@include 'telemetry-metrics/database/name/createuser/error.mdx'
@include 'telemetry-metrics/database/name/initialize.mdx'
@include 'telemetry-metrics/database/name/initialize/error.mdx'
@include 'telemetry-metrics/database/name/renewuser.mdx'
@include 'telemetry-metrics/database/name/renewuser/error.mdx'
@include 'telemetry-metrics/database/name/revokeuser.mdx'
@include 'telemetry-metrics/database/name/revokeuser/error.mdx'
@include 'telemetry-metrics/database/renewuser.mdx'
@include 'telemetry-metrics/database/renewuser/error.mdx'
@include 'telemetry-metrics/database/revokeuser.mdx'
@include 'telemetry-metrics/database/revokeuser/error.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/cert_store_current_entry.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/cert_store_deleted_count.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/cert_store_total_entries_remaining.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/cert_store_total_entries.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/duration.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/failure.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/revoked_cert_current_entry.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/revoked_cert_deleted_count.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/revoked_cert_total_entries_fixed_issuers.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/revoked_cert_total_entries_incorrect_issuers.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/revoked_cert_total_entries_remaining.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/revoked_cert_total_entries.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/start_time_epoch.mdx'
@include 'telemetry-metrics/secrets/pki/tidy/success.mdx'
@include 'telemetry-metrics/vault/audit/device/log_request.mdx'
@include 'telemetry-metrics/vault/audit/device/log_response.mdx'
@include 'telemetry-metrics/vault/audit/log_request_failure.mdx'
@include 'telemetry-metrics/vault/audit/log_request.mdx'
@include 'telemetry-metrics/vault/audit/log_response_failure.mdx'
@include 'telemetry-metrics/vault/audit/log_response.mdx'
@include 'telemetry-metrics/vault/audit/sink_success.mdx'
@include 'telemetry-metrics/vault/audit/sink_failure.mdx'
@include 'telemetry-metrics/vault/audit/fallback_success.mdx'
@include 'telemetry-metrics/vault/audit/fallback_miss.mdx'
@include 'telemetry-metrics/vault/autopilot/failure_tolerance.mdx'
@include 'telemetry-metrics/vault/autopilot/healthy.mdx'
@include 'telemetry-metrics/vault/autopilot/node/healthy.mdx'
@include 'telemetry-metrics/vault/autosnapshots/last/success/time.mdx'
@include 'telemetry-metrics/vault/autosnapshots/percent/maxspace/used.mdx'
@include 'telemetry-metrics/vault/autosnapshots/rotate/duration.mdx'
@include 'telemetry-metrics/vault/autosnapshots/save/duration.mdx'
@include 'telemetry-metrics/vault/autosnapshots/save/errors.mdx'
@include 'telemetry-metrics/vault/autosnapshots/snapshot/size.mdx'
@include 'telemetry-metrics/vault/autosnapshots/total/snapshot/size.mdx'
@include 'telemetry-metrics/vault/azure/delete.mdx'
@include 'telemetry-metrics/vault/azure/get.mdx'
@include 'telemetry-metrics/vault/azure/list.mdx'
@include 'telemetry-metrics/vault/azure/put.mdx'
@include 'telemetry-metrics/vault/barrier/delete.mdx'
@include 'telemetry-metrics/vault/barrier/estimated_encryptions.mdx'
@include 'telemetry-metrics/vault/barrier/get.mdx'
@include 'telemetry-metrics/vault/barrier/list.mdx'
@include 'telemetry-metrics/vault/barrier/put.mdx'
@include 'telemetry-metrics/vault/cache/delete.mdx'
@include 'telemetry-metrics/vault/cache/hit.mdx'
@include 'telemetry-metrics/vault/cache/miss.mdx'
@include 'telemetry-metrics/vault/cache/write.mdx'
@include 'telemetry-metrics/vault/cassandra/delete.mdx'
@include 'telemetry-metrics/vault/cassandra/get.mdx'
@include 'telemetry-metrics/vault/cassandra/list.mdx'
@include 'telemetry-metrics/vault/cassandra/put.mdx'
@include 'telemetry-metrics/vault/cockroachdb/delete.mdx'
@include 'telemetry-metrics/vault/cockroachdb/get.mdx'
@include 'telemetry-metrics/vault/cockroachdb/list.mdx'
@include 'telemetry-metrics/vault/cockroachdb/put.mdx'
@include 'telemetry-metrics/vault/consul/delete.mdx'
@include 'telemetry-metrics/vault/consul/get.mdx'
@include 'telemetry-metrics/vault/consul/list.mdx'
@include 'telemetry-metrics/vault/consul/put.mdx'
@include 'telemetry-metrics/vault/consul/transaction.mdx'
@include 'telemetry-metrics/vault/core/active.mdx'
@include 'telemetry-metrics/vault/core/activity/fragment_size.mdx'
@include 'telemetry-metrics/vault/core/activity/segment_write.mdx'
@include 'telemetry-metrics/vault/core/check_token.mdx'
@include 'telemetry-metrics/vault/core/fetch_acl_and_token.mdx'
@include 'telemetry-metrics/vault/core/handle_login_request.mdx'
@include 'telemetry-metrics/vault/core/handle_request.mdx'
@include 'telemetry-metrics/vault/core/in_flight_requests.mdx'
@include 'telemetry-metrics/vault/core/leadership_lost.mdx'
@include 'telemetry-metrics/vault/core/leadership_setup_failed.mdx'
@include 'telemetry-metrics/vault/core/license/expiration_time_epoch.mdx'
@include 'telemetry-metrics/vault/core/locked_users.mdx'
@include 'telemetry-metrics/vault/core/mount_table/num_entries.mdx'
@include 'telemetry-metrics/vault/core/mount_table/size.mdx'
@include 'telemetry-metrics/vault/core/performance_standby.mdx'
@include 'telemetry-metrics/vault/core/post_unseal.mdx'
@include 'telemetry-metrics/vault/core/pre_seal.mdx'
@include 'telemetry-metrics/vault/core/replication/dr/primary.mdx'
@include 'telemetry-metrics/vault/core/replication/dr/secondary.mdx'
@include 'telemetry-metrics/vault/core/replication/performance/primary.mdx'
@include 'telemetry-metrics/vault/core/replication/performance/secondary.mdx'
@include 'telemetry-metrics/vault/core/replication/write_undo_logs.mdx'
@include 'telemetry-metrics/vault/core/replication/build_progress.mdx'
@include 'telemetry-metrics/vault/core/replication/build_total.mdx'
@include 'telemetry-metrics/vault/core/replication/reindex_stage.mdx'
@include 'telemetry-metrics/vault/core/seal_internal.mdx'
@include 'telemetry-metrics/vault/core/seal_with_request.mdx'
@include 'telemetry-metrics/vault/core/step_down.mdx'
@include 'telemetry-metrics/vault/core/unseal.mdx'
@include 'telemetry-metrics/vault/core/unsealed.mdx'
@include 'telemetry-metrics/vault/couchdb/delete.mdx'
@include 'telemetry-metrics/vault/couchdb/get.mdx'
@include 'telemetry-metrics/vault/couchdb/list.mdx'
@include 'telemetry-metrics/vault/couchdb/put.mdx'
@include 'telemetry-metrics/vault/dynamodb/delete.mdx'
@include 'telemetry-metrics/vault/dynamodb/get.mdx'
@include 'telemetry-metrics/vault/dynamodb/list.mdx'
@include 'telemetry-metrics/vault/dynamodb/put.mdx'
@include 'telemetry-metrics/vault/etcd/delete.mdx'
@include 'telemetry-metrics/vault/etcd/get.mdx'
@include 'telemetry-metrics/vault/etcd/list.mdx'
@include 'telemetry-metrics/vault/etcd/put.mdx'
@include 'telemetry-metrics/vault/expire/fetch_lease_times_by_token.mdx'
@include 'telemetry-metrics/vault/expire/fetch_lease_times.mdx'
@include 'telemetry-metrics/vault/expire/job_manager/queue_length.mdx'
@include 'telemetry-metrics/vault/expire/job_manager/total_jobs.mdx'
@include 'telemetry-metrics/vault/expire/lease_expiration.mdx'
@include 'telemetry-metrics/vault/expire/lease_expiration/error.mdx'
@include 'telemetry-metrics/vault/expire/lease_expiration/time_in_queue.mdx'
@include 'telemetry-metrics/vault/expire/leases/by_expiration.mdx'
@include 'telemetry-metrics/vault/expire/num_irrevocable_leases.mdx'
@include 'telemetry-metrics/vault/expire/num_leases.mdx'
@include 'telemetry-metrics/vault/expire/register_auth.mdx'
@include 'telemetry-metrics/vault/expire/register.mdx'
@include 'telemetry-metrics/vault/expire/renew_token.mdx'
@include 'telemetry-metrics/vault/expire/renew.mdx'
@include 'telemetry-metrics/vault/expire/revoke_by_token.mdx'
@include 'telemetry-metrics/vault/expire/revoke_force.mdx'
@include 'telemetry-metrics/vault/expire/revoke_prefix.mdx'
@include 'telemetry-metrics/vault/expire/revoke.mdx'
@include 'telemetry-metrics/vault/gcs/delete.mdx'
@include 'telemetry-metrics/vault/gcs/get.mdx'
@include 'telemetry-metrics/vault/gcs/list.mdx'
@include 'telemetry-metrics/vault/gcs/lock/lock.mdx'
@include 'telemetry-metrics/vault/gcs/lock/unlock.mdx'
@include 'telemetry-metrics/vault/gcs/lock/value.mdx'
@include 'telemetry-metrics/vault/gcs/put.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/echo.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/echo/errors.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/forward.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/forward/errors.mdx'
@include 'telemetry-metrics/vault/identity/entity/active/monthly.mdx'
@include 'telemetry-metrics/vault/identity/entity/active/partial_month.mdx'
@include 'telemetry-metrics/vault/identity/entity/active/reporting_period.mdx'
@include 'telemetry-metrics/vault/identity/entity/alias/count.mdx'
@include 'telemetry-metrics/vault/identity/entity/count.mdx'
@include 'telemetry-metrics/vault/identity/entity/creation.mdx'
@include 'telemetry-metrics/vault/identity/num_entities.mdx'
@include 'telemetry-metrics/vault/identity/pki_acme/monthly.mdx'
@include 'telemetry-metrics/vault/identity/pki_acme/reporting_period.mdx'
@include 'telemetry-metrics/vault/identity/secret_sync/monthly.mdx'
@include 'telemetry-metrics/vault/identity/secret_sync/reporting_period.mdx'
@include 'telemetry-metrics/vault/identity/upsert_entity_txn.mdx'
@include 'telemetry-metrics/vault/identity/upsert_group_txn.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/length.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/max_length.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/max_size.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/size.mdx'
@include 'telemetry-metrics/vault/logshipper/streamwals/guard_found.mdx'
@include 'telemetry-metrics/vault/logshipper/streamwals/missing_guard.mdx'
@include 'telemetry-metrics/vault/logshipper/streamwals/scanned_entries.mdx'
@include 'telemetry-metrics/vault/merkle/flushdirty.mdx'
@include 'telemetry-metrics/vault/merkle/flushdirty/num_pages.mdx'
@include 'telemetry-metrics/vault/merkle/flushdirty/outstanding_pages.mdx'
@include 'telemetry-metrics/vault/merkle/savecheckpoint.mdx'
@include 'telemetry-metrics/vault/merkle/savecheckpoint/num_dirty.mdx'
@include 'telemetry-metrics/vault/metrics/collection.mdx'
@include 'telemetry-metrics/vault/metrics/collection/error.mdx'
@include 'telemetry-metrics/vault/metrics/collection/interval.mdx'
@include 'telemetry-metrics/vault/mssql/delete.mdx'
@include 'telemetry-metrics/vault/mssql/get.mdx'
@include 'telemetry-metrics/vault/mssql/list.mdx'
@include 'telemetry-metrics/vault/mssql/put.mdx'
@include 'telemetry-metrics/vault/mysql/delete.mdx'
@include 'telemetry-metrics/vault/mysql/get.mdx'
@include 'telemetry-metrics/vault/mysql/list.mdx'
@include 'telemetry-metrics/vault/mysql/put.mdx'
@include 'telemetry-metrics/vault/policy/delete_policy.mdx'
@include 'telemetry-metrics/vault/policy/get_policy.mdx'
@include 'telemetry-metrics/vault/policy/list_policies.mdx'
@include 'telemetry-metrics/vault/policy/set_policy.mdx'
@include 'telemetry-metrics/vault/postgres/delete.mdx'
@include 'telemetry-metrics/vault/postgres/get.mdx'
@include 'telemetry-metrics/vault/postgres/list.mdx'
@include 'telemetry-metrics/vault/postgres/put.mdx'
@include 'telemetry-metrics/vault/quota/lease_count/counter.mdx'
@include 'telemetry-metrics/vault/quota/lease_count/max.mdx'
@include 'telemetry-metrics/vault/quota/lease_count/violation.mdx'
@include 'telemetry-metrics/vault/quota/rate_limit/violation.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/cursor/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/allocated_bytes.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/free_pages.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/pending_pages.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/used_bytes.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/node/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/node/dereferences.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/page/bytes_allocated.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/page/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/rebalance/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/rebalance/time.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/spill/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/spill/time.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/split/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/transaction/currently_open_read_transactions.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/transaction/started_read_transactions.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/write/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/write/time.mdx'
@include 'telemetry-metrics/vault/raft_storage/follower/applied_index_delta.mdx'
@include 'telemetry-metrics/vault/raft_storage/follower/last_heartbeat_ms.mdx'
@include 'telemetry-metrics/vault/raft_storage/stats/applied_index.mdx'
@include 'telemetry-metrics/vault/raft_storage/stats/commit_index.mdx'
@include 'telemetry-metrics/vault/raft_storage/stats/fsm_pending.mdx'
@include 'telemetry-metrics/vault/raft-storage/delete.mdx'
@include 'telemetry-metrics/vault/raft-storage/entry_size.mdx'
@include 'telemetry-metrics/vault/raft-storage/get.mdx'
@include 'telemetry-metrics/vault/raft-storage/list.mdx'
@include 'telemetry-metrics/vault/raft-storage/put.mdx'
@include 'telemetry-metrics/vault/raft-storage/transaction.mdx'
@include 'telemetry-metrics/vault/raft-wal/head-truncations.mdx'
@include 'telemetry-metrics/vault/raft-wal/tail-truncations.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entries-read.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entries-written.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entry-bytes-read.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entry-bytes-written.mdx'
@include 'telemetry-metrics/vault/raft-wal/stable-gets.mdx'
@include 'telemetry-metrics/vault/raft-wal/stable-sets.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-appends.mdx'
@include 'telemetry-metrics/vault/raft-wal/segment-rotations.mdx'
@include 'telemetry-metrics/vault/raft-wal/last-segment-age-seconds.mdx'
@include 'telemetry-metrics/vault/raft/apply.mdx'
@include 'telemetry-metrics/vault/raft/barrier.mdx'
@include 'telemetry-metrics/vault/raft/candidate/electself.mdx'
@include 'telemetry-metrics/vault/raft/commitnumlogs.mdx'
@include 'telemetry-metrics/vault/raft/committime.mdx'
@include 'telemetry-metrics/vault/raft/compactlogs.mdx'
@include 'telemetry-metrics/vault/raft/fsm/apply.mdx'
@include 'telemetry-metrics/vault/raft/fsm/applybatch.mdx'
@include 'telemetry-metrics/vault/raft/fsm/applybatchnum.mdx'
@include 'telemetry-metrics/vault/raft/fsm/enqueue.mdx'
@include 'telemetry-metrics/vault/raft/fsm/restore.mdx'
@include 'telemetry-metrics/vault/raft/fsm/snapshot.mdx'
@include 'telemetry-metrics/vault/raft/fsm/store_config.mdx'
@include 'telemetry-metrics/vault/raft/get.mdx'
@include 'telemetry-metrics/vault/raft/leader/dispatchlog.mdx'
@include 'telemetry-metrics/vault/raft/leader/dispatchnumlogs.mdx'
@include 'telemetry-metrics/vault/raft/leader/lastcontact.mdx'
@include 'telemetry-metrics/vault/raft/list.mdx'
@include 'telemetry-metrics/vault/raft/peers.mdx'
@include 'telemetry-metrics/vault/raft/replication/appendentries/log.mdx'
@include 'telemetry-metrics/vault/raft/replication/appendentries/rpc.mdx'
@include 'telemetry-metrics/vault/raft/replication/heartbeat.mdx'
@include 'telemetry-metrics/vault/raft/replication/installsnapshot.mdx'
@include 'telemetry-metrics/vault/raft/restore.mdx'
@include 'telemetry-metrics/vault/raft/restoreusersnapshot.mdx'
@include 'telemetry-metrics/vault/raft/rpc/appendentries.mdx'
@include 'telemetry-metrics/vault/raft/rpc/appendentries/processlogs.mdx'
@include 'telemetry-metrics/vault/raft/rpc/appendentries/storelogs.mdx'
@include 'telemetry-metrics/vault/raft/rpc/installsnapshot.mdx'
@include 'telemetry-metrics/vault/raft/rpc/processheartbeat.mdx'
@include 'telemetry-metrics/vault/raft/rpc/requestvote.mdx'
@include 'telemetry-metrics/vault/raft/snapshot/create.mdx'
@include 'telemetry-metrics/vault/raft/snapshot/persist.mdx'
@include 'telemetry-metrics/vault/raft/snapshot/takesnapshot.mdx'
@include 'telemetry-metrics/vault/raft/state/candidate.mdx'
@include 'telemetry-metrics/vault/raft/state/follower.mdx'
@include 'telemetry-metrics/vault/raft/state/leader.mdx'
@include 'telemetry-metrics/vault/raft/transition/heartbeat_timeout.mdx'
@include 'telemetry-metrics/vault/raft/transition/leader_lease_timeout.mdx'
@include 'telemetry-metrics/vault/raft/verify_leader.mdx'
@include 'telemetry-metrics/vault/replication/fetchremotekeys.mdx'
@include 'telemetry-metrics/vault/replication/fsm/last_remote_wal.mdx'
@include 'telemetry-metrics/vault/replication/fsm/last_upstream_remote_wal.mdx'
@include 'telemetry-metrics/vault/replication/merkle/commit_index.mdx'
@include 'telemetry-metrics/vault/replication/merklediff.mdx'
@include 'telemetry-metrics/vault/replication/merklesync.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/conflicting_pages.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/create_token_register_auth_lease.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/fetch_keys.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/forward.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/guard_hash.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/persist_alias.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/register_auth.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/register_lease.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/save_mfa_response_auth.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/stream_wals.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/sub_page_hashes.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/sync_counter.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/upsert_group.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/wrap_in_cubbyhole.mdx'
@include 'telemetry-metrics/vault/replication/rpc/dr/server/echo.mdx'
@include 'telemetry-metrics/vault/replication/rpc/dr/server/fetch_keys_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/auth_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/bootstrap_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/conflicting_pages_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/echo.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/last_heartbeat.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/forwarding_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/guard_hash_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/persist_alias_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/persist_persona_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/save_mfa_response_auth.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/stream_wals_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/sub_page_hashes_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/sync_counter_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/upsert_group_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/create_token_register_auth_lease_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/echo.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/register_auth_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/register_lease_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/wrap_token_request.mdx'
@include 'telemetry-metrics/vault/replication/wal/gc.mdx'
@include 'telemetry-metrics/vault/replication/wal/last_dr_wal.mdx'
@include 'telemetry-metrics/vault/replication/wal/last_performance_wal.mdx'
@include 'telemetry-metrics/vault/replication/wal/last_wal.mdx'
@include 'telemetry-metrics/vault/rollback/attempt/mountpoint.mdx'
@include 'telemetry-metrics/vault/rollback/attempt.mdx'
@include 'telemetry-metrics/vault/rollback/inflight.mdx'
@include 'telemetry-metrics/vault/rollback/queued.mdx'
@include 'telemetry-metrics/vault/rollback/waiting.mdx'
@include 'telemetry-metrics/vault/route/create/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/delete/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/list/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/read/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/rollback/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/rollback.mdx'
@include 'telemetry-metrics/vault/runtime/alloc_bytes.mdx'
@include 'telemetry-metrics/vault/runtime/free_count.mdx'
@include 'telemetry-metrics/vault/runtime/gc_pause_ns.mdx'
@include 'telemetry-metrics/vault/runtime/heap_objects.mdx'
@include 'telemetry-metrics/vault/runtime/malloc_count.mdx'
@include 'telemetry-metrics/vault/runtime/num_goroutines.mdx'
@include 'telemetry-metrics/vault/runtime/sys_bytes.mdx'
@include 'telemetry-metrics/vault/runtime/total_gc_pause_ns.mdx'
@include 'telemetry-metrics/vault/runtime/total_gc_runs.mdx'
@include 'telemetry-metrics/vault/s3/delete.mdx'
@include 'telemetry-metrics/vault/s3/get.mdx'
@include 'telemetry-metrics/vault/s3/list.mdx'
@include 'telemetry-metrics/vault/s3/put.mdx'
@include 'telemetry-metrics/vault/secret/kv/count.mdx'
@include 'telemetry-metrics/vault/secret/lease/creation.mdx'
@include 'telemetry-metrics/vault/secrets-sync/destinations.mdx'
@include 'telemetry-metrics/vault/secrets-sync/associations.mdx'
@include 'telemetry-metrics/vault/spanner/delete.mdx'
@include 'telemetry-metrics/vault/spanner/get.mdx'
@include 'telemetry-metrics/vault/spanner/list.mdx'
@include 'telemetry-metrics/vault/spanner/lock/lock.mdx'
@include 'telemetry-metrics/vault/spanner/lock/unlock.mdx'
@include 'telemetry-metrics/vault/spanner/lock/value.mdx'
@include 'telemetry-metrics/vault/spanner/put.mdx'
@include 'telemetry-metrics/vault/swift/delete.mdx'
@include 'telemetry-metrics/vault/swift/get.mdx'
@include 'telemetry-metrics/vault/swift/list.mdx'
@include 'telemetry-metrics/vault/swift/put.mdx'
@include 'telemetry-metrics/vault/token/count.mdx'
@include 'telemetry-metrics/vault/token/count/by_auth.mdx'
@include 'telemetry-metrics/vault/token/count/by_policy.mdx'
@include 'telemetry-metrics/vault/token/count/by_ttl.mdx'
@include 'telemetry-metrics/vault/token/create_root.mdx'
@include 'telemetry-metrics/vault/token/create.mdx'
@include 'telemetry-metrics/vault/token/createaccessor.mdx'
@include 'telemetry-metrics/vault/token/creation.mdx'
@include 'telemetry-metrics/vault/token/lookup.mdx'
@include 'telemetry-metrics/vault/token/revoke_tree.mdx'
@include 'telemetry-metrics/vault/token/revoke.mdx'
@include 'telemetry-metrics/vault/token/store.mdx'
@include 'telemetry-metrics/vault/wal/deletewals.mdx'
@include 'telemetry-metrics/vault/wal/flushready.mdx'
@include 'telemetry-metrics/vault/wal/flushready/queue_len.mdx'
@include 'telemetry-metrics/vault/wal/gc/deleted.mdx'
@include 'telemetry-metrics/vault/wal/gc/total.mdx'
@include 'telemetry-metrics/vault/wal/loadwal.mdx'
@include 'telemetry-metrics/vault/wal/persistwals.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/d.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/i.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/p.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/reject_fraction.mdx'
@include 'telemetry-metrics/vault/zookeeper/delete.mdx'
@include 'telemetry-metrics/vault/zookeeper/get.mdx'
@include 'telemetry-metrics/vault/zookeeper/list.mdx'
@include 'telemetry-metrics/vault/zookeeper/put.mdx'
---
layout: docs
page_title: "Telemetry reference: Availability"
description: >-
  Technical reference for availability-related telemetry values.
---
# Availability telemetry
Availability telemetry provides information about standby and active nodes in
your Vault instance. Enterprise installations also include
[replication](/vault/docs/enterprise/replication) metrics.
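
If you want to spot-check these values on a running node, you can pull them
directly from the [`/sys/metrics`](/vault/api-docs/system/metrics) endpoint.
The following is a minimal sketch, assuming `VAULT_ADDR` and `VAULT_TOKEN` are
set in your environment and that your `telemetry` stanza configures
`prometheus_retention_time` so the Prometheus exposition format is available:

```shell-session
$ curl --silent \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    "${VAULT_ADDR}/v1/sys/metrics?format=prometheus" | grep '^vault_ha_rpc'
```

Note that the Prometheus format replaces dots in metric names with
underscores, so `vault.ha.rpc.client.echo` is reported as
`vault_ha_rpc_client_echo`.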
## Default metrics
@include 'telemetry-metrics/vault/ha/rpc/client/echo.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/echo/errors.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/forward.mdx'
@include 'telemetry-metrics/vault/ha/rpc/client/forward/errors.mdx'
## Merkle tree metrics
@include 'telemetry-metrics/vault/merkle/flushdirty.mdx'
@include 'telemetry-metrics/vault/merkle/flushdirty/num_pages.mdx'
@include 'telemetry-metrics/vault/merkle/flushdirty/outstanding_pages.mdx'
@include 'telemetry-metrics/vault/merkle/savecheckpoint.mdx'
@include 'telemetry-metrics/vault/merkle/savecheckpoint/num_dirty.mdx'
## Write-ahead log (WAL) telemetry
@include 'telemetry-metrics/vault/wal/deletewals.mdx'
@include 'telemetry-metrics/vault/wal/flushready.mdx'
@include 'telemetry-metrics/vault/wal/flushready/queue_len.mdx'
@include 'telemetry-metrics/vault/wal/gc/deleted.mdx'
@include 'telemetry-metrics/vault/wal/gc/total.mdx'
@include 'telemetry-metrics/vault/wal/loadwal.mdx'
@include 'telemetry-metrics/vault/wal/persistwals.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/d.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/i.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/p.mdx'
@include 'telemetry-metrics/vault/wal/write_controller/reject_fraction.mdx'
## Log shipping metrics
@include 'telemetry-metrics/vault/logshipper/buffer/length.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/max_length.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/max_size.mdx'
@include 'telemetry-metrics/vault/logshipper/buffer/size.mdx'
@include 'telemetry-metrics/vault/logshipper/streamwals/guard_found.mdx'
@include 'telemetry-metrics/vault/logshipper/streamwals/missing_guard.mdx'
@include 'telemetry-metrics/vault/logshipper/streamwals/scanned_entries.mdx'
## Replication metrics <EnterpriseAlert product="vault" inline />
@include 'telemetry-metrics/replication-note.mdx'
@include 'telemetry-metrics/vault/replication/fetchremotekeys.mdx'
@include 'telemetry-metrics/vault/replication/fsm/last_remote_wal.mdx'
@include 'telemetry-metrics/vault/replication/fsm/last_upstream_remote_wal.mdx'
@include 'telemetry-metrics/vault/replication/merkle/commit_index.mdx'
@include 'telemetry-metrics/vault/replication/merklediff.mdx'
@include 'telemetry-metrics/vault/replication/merklesync.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/conflicting_pages.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/create_token_register_auth_lease.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/fetch_keys.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/forward.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/guard_hash.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/persist_alias.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/register_auth.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/register_lease.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/save_mfa_response_auth.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/stream_wals.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/sub_page_hashes.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/sync_counter.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/upsert_group.mdx'
@include 'telemetry-metrics/vault/replication/rpc/client/wrap_in_cubbyhole.mdx'
@include 'telemetry-metrics/vault/replication/rpc/dr/server/echo.mdx'
@include 'telemetry-metrics/vault/replication/rpc/dr/server/fetch_keys_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/auth_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/bootstrap_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/conflicting_pages_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/echo.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/last_heartbeat.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/forwarding_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/guard_hash_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/persist_alias_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/persist_persona_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/save_mfa_response_auth.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/stream_wals_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/sub_page_hashes_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/sync_counter_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/server/upsert_group_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/create_token_register_auth_lease_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/echo.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/register_auth_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/register_lease_request.mdx'
@include 'telemetry-metrics/vault/replication/rpc/standby/server/wrap_token_request.mdx'
@include 'telemetry-metrics/vault/replication/wal/gc.mdx'
@include 'telemetry-metrics/vault/replication/wal/last_dr_wal.mdx'
@include 'telemetry-metrics/vault/replication/wal/last_performance_wal.mdx'
@include 'telemetry-metrics/vault/replication/wal/last_wal.mdx'
---
layout: docs
page_title: "Telemetry reference: Core system metrics"
description: >-
Technical reference for core system telemetry values.
---
# Core system telemetry
Core system telemetry provides information about the operational health of your
Vault instance.
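
As a quick way to confirm that these metrics are being collected, you can
query the in-memory telemetry data directly. The following is a minimal
sketch, assuming `VAULT_ADDR` and `VAULT_TOKEN` are set in your environment;
it uses `jq` to filter the default JSON response down to core gauges such as
`vault.core.active` and `vault.core.unsealed`:

```shell-session
$ curl --silent \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    "${VAULT_ADDR}/v1/sys/metrics" \
    | jq '.Gauges[] | select(.Name | startswith("vault.core"))'
```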
## Default metrics
@include 'telemetry-metrics/vault/core/active.mdx'
@include 'telemetry-metrics/vault/core/activity/fragment_size.mdx'
@include 'telemetry-metrics/vault/core/activity/segment_write.mdx'
@include 'telemetry-metrics/vault/core/check_token.mdx'
@include 'telemetry-metrics/vault/core/fetch_acl_and_token.mdx'
@include 'telemetry-metrics/vault/core/handle_login_request.mdx'
@include 'telemetry-metrics/vault/core/handle_request.mdx'
@include 'telemetry-metrics/vault/core/in_flight_requests.mdx'
@include 'telemetry-metrics/vault/core/leadership_lost.mdx'
@include 'telemetry-metrics/vault/core/leadership_setup_failed.mdx'
@include 'telemetry-metrics/vault/core/license/expiration_time_epoch.mdx'
@include 'telemetry-metrics/vault/core/locked_users.mdx'
@include 'telemetry-metrics/vault/core/mount_table/num_entries.mdx'
@include 'telemetry-metrics/vault/core/mount_table/size.mdx'
@include 'telemetry-metrics/vault/core/performance_standby.mdx'
@include 'telemetry-metrics/vault/core/replication/dr/primary.mdx'
@include 'telemetry-metrics/vault/core/replication/dr/secondary.mdx'
@include 'telemetry-metrics/vault/core/replication/performance/primary.mdx'
@include 'telemetry-metrics/vault/core/replication/performance/secondary.mdx'
@include 'telemetry-metrics/vault/core/replication/write_undo_logs.mdx'
@include 'telemetry-metrics/vault/core/step_down.mdx'
## Barrier metrics
@include 'telemetry-metrics/vault/barrier/delete.mdx'
@include 'telemetry-metrics/vault/barrier/estimated_encryptions.mdx'
@include 'telemetry-metrics/vault/barrier/get.mdx'
@include 'telemetry-metrics/vault/barrier/list.mdx'
@include 'telemetry-metrics/vault/barrier/put.mdx'
## Caching metrics
@include 'telemetry-metrics/vault/cache/delete.mdx'
@include 'telemetry-metrics/vault/cache/hit.mdx'
@include 'telemetry-metrics/vault/cache/miss.mdx'
@include 'telemetry-metrics/vault/cache/write.mdx'
## Metric collection metrics
@include 'telemetry-metrics/vault/metrics/collection.mdx'
@include 'telemetry-metrics/vault/metrics/collection/error.mdx'
@include 'telemetry-metrics/vault/metrics/collection/interval.mdx'
## Quota metrics
@include 'telemetry-metrics/quota-intro.mdx'
@include 'telemetry-metrics/vault/quota/lease_count/counter.mdx'
@include 'telemetry-metrics/vault/quota/lease_count/max.mdx'
@include 'telemetry-metrics/vault/quota/lease_count/violation.mdx'
@include 'telemetry-metrics/vault/quota/rate_limit/violation.mdx'
## Request limiter metrics
@include 'telemetry-metrics/request-limiter-intro.mdx'
@include 'telemetry-metrics/vault/core/request-limiter/write.mdx'
@include 'telemetry-metrics/vault/core/request-limiter/special_path.mdx'
@include 'telemetry-metrics/vault/core/request-limiter/service_unavailable.mdx'
@include 'telemetry-metrics/vault/core/request-limiter/success.mdx'
@include 'telemetry-metrics/vault/core/request-limiter/dropped.mdx'
@include 'telemetry-metrics/vault/core/request-limiter/ignored.mdx'
## Rollback metrics
@include 'telemetry-metrics/rollback-intro.mdx'
@include 'telemetry-metrics/vault/rollback/attempt/mountpoint.mdx'
@include 'telemetry-metrics/vault/rollback/attempt.mdx'
@include 'telemetry-metrics/vault/rollback/inflight.mdx'
@include 'telemetry-metrics/vault/rollback/queued.mdx'
@include 'telemetry-metrics/vault/rollback/waiting.mdx'
## Route metrics
@include 'telemetry-metrics/route-intro.mdx'
@include 'telemetry-metrics/vault/route/create/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/delete/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/list/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/read/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/rollback/mountpoint.mdx'
@include 'telemetry-metrics/vault/route/rollback.mdx'
## Runtime metrics
@include 'telemetry-metrics/runtime-note.mdx'
@include 'telemetry-metrics/vault/runtime/alloc_bytes.mdx'
@include 'telemetry-metrics/vault/runtime/free_count.mdx'
@include 'telemetry-metrics/vault/runtime/gc_pause_ns.mdx'
@include 'telemetry-metrics/vault/runtime/heap_objects.mdx'
@include 'telemetry-metrics/vault/runtime/malloc_count.mdx'
@include 'telemetry-metrics/vault/runtime/num_goroutines.mdx'
@include 'telemetry-metrics/vault/runtime/sys_bytes.mdx'
@include 'telemetry-metrics/vault/runtime/total_gc_pause_ns.mdx'
@include 'telemetry-metrics/vault/runtime/total_gc_runs.mdx'
## Seal metrics
@include 'telemetry-metrics/vault/core/post_unseal.mdx'
@include 'telemetry-metrics/vault/core/pre_seal.mdx'
@include 'telemetry-metrics/vault/core/seal_encrypt.mdx'
@include 'telemetry-metrics/vault/core/seal_decrypt.mdx'
@include 'telemetry-metrics/vault/core/seal_internal.mdx'
@include 'telemetry-metrics/vault/core/seal_unreachable.mdx'
@include 'telemetry-metrics/vault/core/seal_with_request.mdx'
@include 'telemetry-metrics/vault/core/unseal.mdx'
@include 'telemetry-metrics/vault/core/unsealed.mdx'
---
layout: docs
page_title: "Telemetry reference: Raft metrics"
description: >-
Technical reference for integrated storage telemetry values.
---
# Raft telemetry
Raft telemetry provides information on
Vault [integrated storage](/vault/docs/configuration/storage/raft).
## Default metrics
@include 'telemetry-metrics/vault/raft/apply.mdx'
@include 'telemetry-metrics/vault/raft/barrier.mdx'
@include 'telemetry-metrics/vault/raft/candidate/electself.mdx'
@include 'telemetry-metrics/vault/raft/commitnumlogs.mdx'
@include 'telemetry-metrics/vault/raft/committime.mdx'
@include 'telemetry-metrics/vault/raft/compactlogs.mdx'
@include 'telemetry-metrics/vault/raft/fsm/apply.mdx'
@include 'telemetry-metrics/vault/raft/fsm/applybatch.mdx'
@include 'telemetry-metrics/vault/raft/fsm/applybatchnum.mdx'
@include 'telemetry-metrics/vault/raft/fsm/enqueue.mdx'
@include 'telemetry-metrics/vault/raft/fsm/restore.mdx'
@include 'telemetry-metrics/vault/raft/fsm/snapshot.mdx'
@include 'telemetry-metrics/vault/raft/fsm/store_config.mdx'
@include 'telemetry-metrics/vault/raft/get.mdx'
@include 'telemetry-metrics/vault/raft/list.mdx'
@include 'telemetry-metrics/vault/raft/peers.mdx'
@include 'telemetry-metrics/vault/raft/restore.mdx'
@include 'telemetry-metrics/vault/raft/restoreusersnapshot.mdx'
@include 'telemetry-metrics/vault/raft/rpc/appendentries.mdx'
@include 'telemetry-metrics/vault/raft/rpc/appendentries/processlogs.mdx'
@include 'telemetry-metrics/vault/raft/rpc/appendentries/storelogs.mdx'
@include 'telemetry-metrics/vault/raft/rpc/installsnapshot.mdx'
@include 'telemetry-metrics/vault/raft/rpc/processheartbeat.mdx'
@include 'telemetry-metrics/vault/raft/rpc/requestvote.mdx'
@include 'telemetry-metrics/vault/raft/snapshot/create.mdx'
@include 'telemetry-metrics/vault/raft/snapshot/persist.mdx'
@include 'telemetry-metrics/vault/raft/snapshot/takesnapshot.mdx'
@include 'telemetry-metrics/vault/raft/state/candidate.mdx'
@include 'telemetry-metrics/vault/raft/state/follower.mdx'
@include 'telemetry-metrics/vault/raft/state/leader.mdx'
@include 'telemetry-metrics/vault/raft/transition/heartbeat_timeout.mdx'
@include 'telemetry-metrics/vault/raft/transition/leader_lease_timeout.mdx'
@include 'telemetry-metrics/vault/raft/verify_leader.mdx'
## Autopilot metrics
@include 'telemetry-metrics/raft-autopilot-note.mdx'
@include 'telemetry-metrics/vault/autopilot/failure_tolerance.mdx'
@include 'telemetry-metrics/vault/autopilot/healthy.mdx'
@include 'telemetry-metrics/vault/autopilot/node/healthy.mdx'
## Leadership change metrics
@include 'telemetry-metrics/raft-leadership-intro.mdx'
@include 'telemetry-metrics/vault/raft/leader/dispatchlog.mdx'
@include 'telemetry-metrics/vault/raft/leader/dispatchnumlogs.mdx'
@include 'telemetry-metrics/vault/raft/leader/lastcontact.mdx'
## Raft replication metrics
@include 'telemetry-metrics/vault/raft/replication/appendentries/log.mdx'
@include 'telemetry-metrics/vault/raft/replication/appendentries/rpc.mdx'
@include 'telemetry-metrics/vault/raft/replication/heartbeat.mdx'
@include 'telemetry-metrics/vault/raft/replication/installsnapshot.mdx'
## Storage metrics
@include 'telemetry-metrics/vault/raft_storage/bolt/cursor/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/allocated_bytes.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/free_pages.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/pending_pages.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/freelist/used_bytes.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/node/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/node/dereferences.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/page/bytes_allocated.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/page/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/rebalance/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/rebalance/time.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/spill/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/spill/time.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/split/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/transaction/currently_open_read_transactions.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/transaction/started_read_transactions.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/write/count.mdx'
@include 'telemetry-metrics/vault/raft_storage/bolt/write/time.mdx'
@include 'telemetry-metrics/vault/raft_storage/follower/applied_index_delta.mdx'
@include 'telemetry-metrics/vault/raft_storage/follower/last_heartbeat_ms.mdx'
@include 'telemetry-metrics/vault/raft_storage/stats/applied_index.mdx'
@include 'telemetry-metrics/vault/raft_storage/stats/commit_index.mdx'
@include 'telemetry-metrics/vault/raft_storage/stats/fsm_pending.mdx'
@include 'telemetry-metrics/vault/raft-storage/delete.mdx'
@include 'telemetry-metrics/vault/raft-storage/entry_size.mdx'
@include 'telemetry-metrics/vault/raft-storage/get.mdx'
@include 'telemetry-metrics/vault/raft-storage/list.mdx'
@include 'telemetry-metrics/vault/raft-storage/put.mdx'
@include 'telemetry-metrics/vault/raft-storage/transaction.mdx'
## Write-ahead logging (WAL) metrics
@include 'telemetry-metrics/vault/raft-wal/head-truncations.mdx'
@include 'telemetry-metrics/vault/raft-wal/tail-truncations.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entries-read.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entries-written.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entry-bytes-read.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-entry-bytes-written.mdx'
@include 'telemetry-metrics/vault/raft-wal/stable-gets.mdx'
@include 'telemetry-metrics/vault/raft-wal/stable-sets.mdx'
@include 'telemetry-metrics/vault/raft-wal/log-appends.mdx'
@include 'telemetry-metrics/vault/raft-wal/segment-rotations.mdx'
@include 'telemetry-metrics/vault/raft-wal/last-segment-age-seconds.mdx'
---
layout: docs
page_title: "Telemetry reference: Storage plugin metrics"
description: >-
Technical reference for individual storage plugin telemetry values.
---
# Storage plugin telemetry
Storage telemetry provides information on the health of Vault storage and your
configured storage backends. For integrated storage metrics, refer to the
[Raft telemetry](/vault/docs/internals/metrics/raft) metric list.
## Barrier metrics
@include 'telemetry-metrics/vault/barrier/delete.mdx'
@include 'telemetry-metrics/vault/barrier/estimated_encryptions.mdx'
@include 'telemetry-metrics/vault/barrier/get.mdx'
@include 'telemetry-metrics/vault/barrier/list.mdx'
@include 'telemetry-metrics/vault/barrier/put.mdx'
## Caching metrics
@include 'telemetry-metrics/vault/cache/delete.mdx'
@include 'telemetry-metrics/vault/cache/hit.mdx'
@include 'telemetry-metrics/vault/cache/miss.mdx'
@include 'telemetry-metrics/vault/cache/write.mdx'
## Amazon S3 metrics
@include 'telemetry-metrics/vault/s3/delete.mdx'
@include 'telemetry-metrics/vault/s3/get.mdx'
@include 'telemetry-metrics/vault/s3/list.mdx'
@include 'telemetry-metrics/vault/s3/put.mdx'
## Azure metrics
@include 'telemetry-metrics/vault/azure/delete.mdx'
@include 'telemetry-metrics/vault/azure/get.mdx'
@include 'telemetry-metrics/vault/azure/list.mdx'
@include 'telemetry-metrics/vault/azure/put.mdx'
## Cassandra metrics
@include 'telemetry-metrics/vault/cassandra/delete.mdx'
@include 'telemetry-metrics/vault/cassandra/get.mdx'
@include 'telemetry-metrics/vault/cassandra/list.mdx'
@include 'telemetry-metrics/vault/cassandra/put.mdx'
## Cockroach database metrics
@include 'telemetry-metrics/vault/cockroachdb/delete.mdx'
@include 'telemetry-metrics/vault/cockroachdb/get.mdx'
@include 'telemetry-metrics/vault/cockroachdb/list.mdx'
@include 'telemetry-metrics/vault/cockroachdb/put.mdx'
## Consul metrics
@include 'telemetry-metrics/vault/consul/delete.mdx'
@include 'telemetry-metrics/vault/consul/get.mdx'
@include 'telemetry-metrics/vault/consul/list.mdx'
@include 'telemetry-metrics/vault/consul/put.mdx'
@include 'telemetry-metrics/vault/consul/transaction.mdx'
## Couch database metrics
@include 'telemetry-metrics/vault/couchdb/delete.mdx'
@include 'telemetry-metrics/vault/couchdb/get.mdx'
@include 'telemetry-metrics/vault/couchdb/list.mdx'
@include 'telemetry-metrics/vault/couchdb/put.mdx'
## Dynamo database metrics
@include 'telemetry-metrics/vault/dynamodb/delete.mdx'
@include 'telemetry-metrics/vault/dynamodb/get.mdx'
@include 'telemetry-metrics/vault/dynamodb/list.mdx'
@include 'telemetry-metrics/vault/dynamodb/put.mdx'
## Etcd metrics
@include 'telemetry-metrics/vault/etcd/delete.mdx'
@include 'telemetry-metrics/vault/etcd/get.mdx'
@include 'telemetry-metrics/vault/etcd/list.mdx'
@include 'telemetry-metrics/vault/etcd/put.mdx'
## Google Cloud metrics
@include 'telemetry-metrics/vault/gcs/delete.mdx'
@include 'telemetry-metrics/vault/gcs/get.mdx'
@include 'telemetry-metrics/vault/gcs/list.mdx'
@include 'telemetry-metrics/vault/gcs/lock/lock.mdx'
@include 'telemetry-metrics/vault/gcs/lock/unlock.mdx'
@include 'telemetry-metrics/vault/gcs/lock/value.mdx'
@include 'telemetry-metrics/vault/gcs/put.mdx'
## Google Cloud - Spanner metrics
@include 'telemetry-metrics/vault/spanner/delete.mdx'
@include 'telemetry-metrics/vault/spanner/get.mdx'
@include 'telemetry-metrics/vault/spanner/list.mdx'
@include 'telemetry-metrics/vault/spanner/lock/lock.mdx'
@include 'telemetry-metrics/vault/spanner/lock/unlock.mdx'
@include 'telemetry-metrics/vault/spanner/lock/value.mdx'
@include 'telemetry-metrics/vault/spanner/put.mdx'
## Microsoft SQL Server (MSSQL) metrics
@include 'telemetry-metrics/vault/mssql/delete.mdx'
@include 'telemetry-metrics/vault/mssql/get.mdx'
@include 'telemetry-metrics/vault/mssql/list.mdx'
@include 'telemetry-metrics/vault/mssql/put.mdx'
## MySQL metrics
@include 'telemetry-metrics/vault/mysql/delete.mdx'
@include 'telemetry-metrics/vault/mysql/get.mdx'
@include 'telemetry-metrics/vault/mysql/list.mdx'
@include 'telemetry-metrics/vault/mysql/put.mdx'
## PostgreSQL metrics
@include 'telemetry-metrics/vault/postgres/delete.mdx'
@include 'telemetry-metrics/vault/postgres/get.mdx'
@include 'telemetry-metrics/vault/postgres/list.mdx'
@include 'telemetry-metrics/vault/postgres/put.mdx'
## Swift metrics
@include 'telemetry-metrics/vault/swift/delete.mdx'
@include 'telemetry-metrics/vault/swift/get.mdx'
@include 'telemetry-metrics/vault/swift/list.mdx'
@include 'telemetry-metrics/vault/swift/put.mdx'
## ZooKeeper metrics
@include 'telemetry-metrics/vault/zookeeper/delete.mdx'
@include 'telemetry-metrics/vault/zookeeper/get.mdx'
@include 'telemetry-metrics/vault/zookeeper/list.mdx'
@include 'telemetry-metrics/vault/zookeeper/put.mdx'
---
layout: docs
page_title: Audit Devices
description: Audit devices are mountable devices that log requests and responses in Vault.
---
# Audit devices
Audit devices are the components in Vault that collectively keep a detailed log of all
requests to Vault, and their responses. Because every operation with Vault is an API
request/response, when using a single audit device, the audit log contains _every_ interaction with
the Vault API, including errors - except for a few paths which do not go via the audit system.
The non-audited paths are:
- `sys/init`
- `sys/seal-status`
- `sys/seal`
- `sys/step-down`
- `sys/unseal`
- `sys/leader`
- `sys/health`
- `sys/rekey/init`
- `sys/rekey/update`
- `sys/rekey/verify`
- `sys/rekey-recovery-key/init`
- `sys/rekey-recovery-key/update`
- `sys/rekey-recovery-key/verify`
- `sys/storage/raft/bootstrap`
- `sys/storage/raft/join`
- `sys/internal/ui/feature-flags`
and also, if the relevant listener configuration settings allow unauthenticated access:
- `sys/metrics`
- `sys/pprof/*`
- `sys/in-flight-req`
## Enabling multiple devices
When multiple audit devices are enabled, Vault will attempt to send the audit
logs to all of them. This allows you to not only have redundant copies, but also
a way to check for data tampering in the logs themselves.
Vault considers a request to be successful if it can log to *at least* one
configured audit device (see: [Blocked Audit
Devices](/vault/docs/audit#blocked-audit-devices) section below). Therefore in order
to build a complete picture of all audited actions, use the aggregate/union of
the logs from each audit device.
~> Note: It is **highly recommended** that you configure Vault to use multiple audit
devices. Audit failures can prevent Vault from servicing requests, so it is
important to provide at least one other device.
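For example, a minimal sketch that pairs a file device with a syslog device as a second copy (the syslog `tag` and `facility` values shown are illustrative):
```shell-session
$ vault audit enable file file_path=/var/log/vault_audit.log
$ vault audit enable syslog tag="vault" facility="AUTH"
```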
## Format
Each line in the audit log is a JSON object. The `type` field specifies what
type of object it is. Currently, only two types exist: `request` and `response`.
The line contains all of the information for any given request and response. By
default, all the sensitive information is first hashed before logging in the
audit logs.
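As an abridged, illustrative sketch (the field values here are invented and many fields are omitted), a single `request` line can look like:
```json
{
  "time": "2024-01-01T00:00:00Z",
  "type": "request",
  "auth": {
    "policies": ["admin"],
    "token_type": "service"
  },
  "request": {
    "id": "7d2c7b18-...",
    "operation": "update",
    "path": "secret/data/my-app",
    "remote_address": "127.0.0.1"
  }
}
```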
## Sensitive information
The audit logs contain the full request and response objects for every
interaction with Vault. The request and response can be matched utilizing a
unique identifier assigned to each request.
Most strings contained within requests and responses are hashed with a salt using HMAC-SHA256.
The purpose of the hash is so that secrets aren't in plaintext within your audit logs.
However, you're still able to check the value of secrets by generating HMACs yourself;
this can be done with the audit device's hash function and salt by using the `/sys/audit-hash`
API endpoint (see the documentation for more details).
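For example, a sketch of hashing a value against a file audit device mounted at `file/` (the mount path and input value here are assumptions):
```shell
curl \
    --header "X-Vault-Token: ..." \
    --request POST \
    --data '{ "input": "my-secret-value" }' \
    http://127.0.0.1:8200/v1/sys/audit-hash/file
```
The `hash` field in the response can then be compared against the HMAC values recorded in the audit log.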
~> Currently, only strings that come from JSON or returned in JSON are
HMAC'd. Other data types, like integers, booleans, and so on, are passed
through in plaintext. We recommend that all sensitive data be provided as string values
inside all JSON sent to Vault (i.e., that integer values are provided in quotes).
While most strings are hashed, Vault can be configured to make some exceptions.
For example, in auth methods and secrets engines, users can enable additional exceptions
using the [secrets enable](/vault/docs/commands/secrets/enable) command, and then
[tune](/vault/docs/commands/secrets/tune) it afterward.
**see also**:
[auth enable](/vault/docs/commands/auth/enable)
[auth tune](/vault/docs/commands/auth/tune)
## Audit request headers
Use the [Vault API](/vault/api-docs/system/config-auditing) to configure which
incoming client request headers Vault monitors and records in the audit log.
By default, Vault **does not** HMAC request header values if you
[create](/vault/api-docs/system/config-auditing#create-update-audit-request-header)
an exception to allow request headers in the audit log. To HMAC the header
values, you must [configure](/vault/api-docs/system/config-auditing#hmac) the
relevant headers individually.
### Default headers
To help correlate requests across distributed systems, Vault automatically
records the following headers in the audit log:
- `correlation-id`
- `x-correlation-id`
To ensure Vault uses HMAC on the header values during logging, set the `hmac` value to true for the `config/auditing/request-headers` API call.
For example, to enable HMAC for `correlation-id`:
```shell
curl \
--header "X-Vault-Token: ..." \
http://127.0.0.1:8200/v1/sys/config/auditing/request-headers/correlation-id \
--data '{ "hmac": true }'
```
Another way to identify the source of a request is through the User-Agent request header.
Vault will automatically record this value as `user-agent` within the `headers` of a
request entry within the audit log.
## Enabling/Disabling audit devices
When a Vault server is first initialized, no auditing is enabled. Audit
devices must be enabled by a root user using `vault audit enable`.
When enabling an audit device, options can be passed to it to configure it.
For example, the command below enables the file audit device:
```shell-session
$ vault audit enable file file_path=/var/log/vault_audit.log
```
In the command above, we passed the `file_path` parameter to specify the path
where the audit log will be written. Each audit device has its own
set of parameters. See the documentation for each audit device for more details.
~> Note: Audit device configuration is replicated to all nodes within a
cluster by default, and to performance/DR secondaries for Vault Enterprise clusters.
Before enabling an audit device, ensure that all nodes within the cluster(s)
will be able to successfully log to the audit device to avoid Vault being
blocked from serving requests.
An audit device can be limited to only within the node's cluster with the [`local`](/vault/api-docs/system/audit#local) parameter.
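For example, a sketch of enabling a cluster-local file device (the path is a placeholder):
```shell-session
$ vault audit enable -local file file_path=/var/log/vault_audit_local.log
```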
When an audit device is disabled, it will stop receiving logs immediately.
The existing logs that it did store are untouched.
~> Note: Once an audit device is disabled, you will no longer be able to HMAC values
for comparison with entries in the audit logs. This is true even if you re-enable
the audit device at the same path, as a new salt will be created for hashing.
## Blocked audit devices
Audit device logs are critically important, and ignoring auditing failures opens an avenue for attack. Vault will not respond to requests when no enabled audit devices can record them.
Vault distinguishes between two types of audit device failures:
- A blocking failure is one where an attempt to write to the audit device never completes. This is unlikely with a local disk device, but could occur with a network-based audit device.
- When multiple audit devices are enabled, if any of them fail in a non-blocking fashion, Vault requests can still complete successfully provided at least one audit device successfully writes the audit record. If any of the audit devices fail in a blocking fashion, however, Vault requests will hang until the blocking is resolved.
In other words, Vault will not complete any requests until the blocked audit device can write.
## Tutorial
Refer to [Blocked Audit Devices](/vault/tutorials/monitoring/blocked-audit-devices) for a step-by-step tutorial.
## API
Audit devices also have a full HTTP API. Please see the [Audit device API
docs](/vault/api-docs/system/audit) for more details.
## Common configuration options
@include 'audit-options-common.mdx'
## Eliding list response bodies
Some Vault responses can be very large. Primarily, this affects list operations -
as Vault lacks pagination in its APIs, listing a very large collection can result
in a response that is tens of megabytes long. Some audit backends are unable to
process individual audit records of larger sizes.
The contents of the response for a list operation are often not very interesting;
most responses contain only a `keys` field with a list of IDs. Select API endpoints
additionally return a `key_info` field, a map from ID to some additional
information about the list entry - `identity/entity/id/` is an example of this.
Even in this case, the response to a list operation is usually less confidential
or public information, for which having the full response in the audit logs is of
lesser importance.
The `elide_list_responses` audit option provides the flexibility to omit the full
list response data from the audit log, mitigating the creation of very long
individual audit records.
When enabled, it affects only audit records of `type=response` and
`request.operation=list`. The values of `response.data.keys` and
`response.data.key_info` will be replaced with a simple integer, recording how
many entries were contained in the list (`keys`) or map (`key_info`) - therefore
even with this feature enabled, it is still possible to see how many items were
returned by a list operation.
This extra processing only affects the response data fields `keys` and `key_info`,
and only when they have the expected data types - in the event a list response
contains data outside of the usual conventions that apply to Vault list responses,
it will be left as is by this feature.
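For example, a sketch of setting the option when enabling a file device:
```shell-session
$ vault audit enable file file_path=/var/log/vault_audit.log elide_list_responses=true
```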
Here is an example of an audit record that has been processed by this feature
(formatted with extra whitespace, and with fields not relevant to the example
omitted):
```json
{
"type": "response",
"request": {
"operation": "list"
},
"response": {
"data": {
"key_info": 4,
"keys": 4
}
}
}
```
---
layout: docs
page_title: Password Policies
description: >-
Password policies are used in some secret engines to allow users to define how passwords are generated
for dynamic & static users within those engines.
---
# Password policies
A password policy is a set of instructions on how to generate a password, similar to other password
generators. These password policies are used in a subset of secret engines to allow you to configure
how a password is generated for that engine. Not all secret engines utilize password policies, so check
the documentation for the engine you are using for compatibility.
**Note:** Password policies are unrelated to [Policies](/vault/docs/concepts/policies) other than sharing similar names.
Password policies are available in Vault version 1.5+. [API docs can be found here](/vault/api-docs/system/policies-password).
!> Password policies are an advanced usage of Vault. They generate credentials for external systems
(databases, LDAP, AWS, etc.) and should be used with caution.
## Design
Password policies fundamentally have two parts: a length, and a set of rules that a password must
adhere to. Passwords are randomly generated from the de-duplicated union of charsets found in all rules
and then checked against each of the rules to determine if the candidate password is valid according
to the policy. See [Candidate Password Generation](#candidate-password-generation) for details on how
passwords are generated prior to being checked against the rule set.
A rule is an assertion about a candidate password string that indicates whether or not
the password is acceptable. For example: a "charset" rule states that a password must have at least one
lowercase letter in it. This rule will reject any password that does not contain at least one lowercase letter.
Multiple rules may be specified within a policy to create more complex rules, such as requiring at least
one lowercase letter, at least one uppercase letter, and at least one number.
The flow looks like:
![Password policy evaluation flow](/img/vault-password-policy-flow.svg)
## Candidate password generation
How a candidate password is generated is extremely important. Great care must be taken to ensure that
passwords aren't created in a way that can be exploited by threat actors. This section describes how we
generate passwords within password policies to ensure that passwords are generated as securely as possible.
To generate a candidate password, three things are needed:
1. A [cryptographically secure random number generator](https://golang.org/pkg/crypto/rand/) (RNG).
2. A character set (charset) to select characters from.
3. The length of the password.
At a high level, we use our RNG to generate N numbers that correspond to indices into the charset
array where N is the length of the password we wish to create. Each value returned from the RNG is then
used to extract a character from the charset into the password.
For example, let's generate a password of length 8 from the charset `abcdefghij`:
The RNG is used to generate 8 random values. For our example let's say those values are:
`[3, 2, 0, 8, 7, 3, 5, 1]`
Each of these values is an index into the charset array:
`[3, 2, 0, 8, 7, 3, 5, 1]` => `[d, c, a, i, h, d, f, b]`
This gives us our candidate password: `dcaihdfb` which can then be run through the rules of the policy.
In a real world scenario, the values in the random array will be between `[0-255]` as that is the range of
values that a single byte can be. The value is restricted to the size of the charset array by using the
[modulo operation](https://en.wikipedia.org/wiki/Modulo_operation) to prevent referencing a character
outside the bounds of the charset. However this can introduce a problem with bias.
### Preventing bias
When using the [modulo operation](https://en.wikipedia.org/wiki/Modulo_operation) to generate a password,
you must be very careful to prevent the introduction of bias. When generating a random number between
[0-255] for a charset that has a length that isn't evenly divisible into 256, some of the first characters
in the charset may be selected more frequently than the remaining characters.
To demonstrate this, let's simplify the math. Assume that we have a charset of length 10: `abcdefghij`.
Let's also assume that our RNG generates values `[0-25]`. The first 10 values `0-9` correspond to each
character in our charset. The next 10 values `10-19` also correspond to each character in our charset.
However, the next 6 values `20-25` correspond to only the first 6 characters in the charset. This means
that those 6 characters `abcdef` can be selected more often than the last 4 characters `ghij`.
In order to prevent this from happening, we calculate the maximum value that we can allow an index to be.
This is based on the length of the charset we are selecting from. In the example above, the maximum index
value we should allow is 19: valid values run from 0 up to (but not including) 20, which is the largest
integer multiple of the charset length that does not exceed the number of values our RNG can produce. When
the RNG generates a value larger than the maximum allowed value, that value is ignored and we continue to
the next one. Passwords
do not lose any length because we continue generating numbers until the password is fully filled in to the
length requested.
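A minimal Go sketch of this rejection approach (illustrative only, not Vault's implementation; it assumes the charset is shorter than 256 characters):
```go
// Bias-free password generation by rejection sampling.
// Illustrative sketch only; assumes len(charset) < 256.
package main

import (
	"crypto/rand"
	"fmt"
)

func generate(charset string, length int) (string, error) {
	// Largest multiple of len(charset) representable in a byte; random
	// bytes at or above this value are rejected to avoid modulo bias.
	max := byte(256 - (256 % len(charset)))
	out := make([]byte, 0, length)
	buf := make([]byte, 1)
	for len(out) < length {
		if _, err := rand.Read(buf); err != nil {
			return "", err
		}
		if buf[0] >= max {
			continue // rejected: would bias the first characters of the charset
		}
		out = append(out, charset[int(buf[0])%len(charset)])
	}
	return string(out), nil
}

func main() {
	pw, err := generate("abcdefghij", 8)
	if err != nil {
		panic(err)
	}
	fmt.Println(pw)
}
```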
## Performance characteristics
Characterizing password generation performance with this model is heavily dependent on the policy
configuration. In short, the more restrictive the policy, the longer it will take to generate a password.
This isn't always true, but it is a useful guideline. The performance curve can be generalized as:
`(time to generate a candidate password) * (number of candidate passwords generated)`
Where the number of times a candidate password needs to be generated is a function of how likely a given
candidate password does not pass all of the rules.
Here are some example policy configurations with their performance characteristics below. Each of these
policies have the same charset that candidate passwords are generated from (94 characters). The only
difference is the minimum number of characters for various character subsets.
<details>
<summary>No Minimum Characters</summary>
```hcl
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
}
rule "charset" {
charset = "0123456789"
}
rule "charset" {
charset = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
}
```
</details>
<details>
<summary>1 uppercase, 1 lowercase, 1 numeric</summary>
```hcl
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
}
```
</details>
<details>
<summary>1 uppercase, 1 lowercase, 1 numeric, 1 from all ASCII characters</summary>
```hcl
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
min-chars = 1
}
```
</details>
<details>
<summary>1 uppercase, 1 lowercase, 1 numeric, 1 from <code>!@#$</code></summary>
```hcl
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "!@#$"
min-chars = 1
}
# Fleshes out the rest of the symbols but doesn't add any required characters
rule "charset" {
charset = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
}
```
</details>
![Password generation performance by policy](/img/vault-password-policy-performance.svg)
As more characters are generated, the amount of time increases (as seen in `No Minimum Characters`).
This upward trend can be dwarfed by restricting charsets. When a password is short, the chances of a character
being selected from a subset is smaller. For instance, if you have a 1 character password from the charset
`abcde` the chances of selecting `c` from it is 1/5. However if you have a 2 character password, the chances
of selecting `c` at least once are greater than 1/5 because you have a second chance to select `c` from
the charset.
In these examples, as the length of the password increases, the amount of time to generate a password trends
down, levels off, and then slowly increases. This is a combination of the two effects listed above: increasing
time to generate more characters vs the chances of the character subsets being selected. When a single subset is
very small (such as `!@#$`) the chances of it being selected are much smaller (4/94) than if the subset is larger
(26/94 for lowercase characters). This can result in a dramatic loss of performance.
<details>
<summary><b>Click here for more details on password generation probabilities</b></summary>
In the examples above, the charset being used to generate candidate passwords is 94 characters long.
Randomly choosing a given character from the 94 character charset has a 1/94 chance. The chance of choosing
it at least once in N tries (where N is the length of the password) is `1-(1-1/94)^N`.
If we expand this to look at a subset of characters (such as lowercase characters) the chances of selecting
a character from that subset is `1-(1-L/94)^N` where `L` is the length of the subset. For lowercase
characters, we get a probability of `1-(1-26/94)^N`.
If we do this for uppercase letters as well as numbers, then we get a combined probability curve:
`p = (1-(1-26/94)^N) * (1-(1-26/94)^N) * (1-(1-10/94)^N)`
![Probability of a valid candidate password by length](/img/vault-password-policy-chance.svg)
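As a quick sanity check on the curve, a standalone Go sketch that evaluates `p` for a few password lengths:
```go
// Evaluates the combined probability curve above for several lengths.
package main

import (
	"fmt"
	"math"
)

func main() {
	for _, n := range []float64{4, 8, 12, 20} {
		p := (1 - math.Pow(1-26.0/94, n)) * // at least one lowercase
			(1 - math.Pow(1-26.0/94, n)) * // at least one uppercase
			(1 - math.Pow(1-10.0/94, n)) // at least one number
		fmt.Printf("N=%2.0f  p=%.4f\n", n, p)
	}
}
```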
It should be noted that this probability curve only applies to this specific policy. To understand the
performance characteristics of a given policy, you should run your policy with the
[`generate`](/vault/api-docs/system/policies-password) endpoint to see how much time the policy takes to
produce passwords.
</details>
## Password policy syntax
Password policies are defined in [HCL](https://github.com/hashicorp/hcl) or JSON which defines
the length of the password and a set of rules a password must adhere to.
See the [API docs](/vault/api-docs/system/policies-password) for examples of the commands to save, read,
and delete password policies.
Here is a very simple policy which generates 20 character passwords from lowercase characters:
```hcl
length = 20
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
}
```
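To try it out, here is a hypothetical CLI session (the policy name, file name, and generated password below are illustrative) that stores this policy and generates a password from it:
```shell-session
$ vault write sys/policies/password/example-policy policy=@example-policy.hcl
Success! Data written to: sys/policies/password/example-policy

$ vault read -field=password sys/policies/password/example-policy/generate
zyhkmevqucbdtpwrfsxn
```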
Multiple rules may be specified, including multiple rules of the same type. For instance, the following
policy will generate a 20 character password with at least one lowercase letter, at least one uppercase
letter, at least one number, and at least one symbol from the set `!@#$%^&*`:
```hcl
length = 20
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "!@#$%^&*"
min-chars = 1
}
```
At least one charset must be specified for a policy to be valid. In order to generate a password, a charset
must be available to select characters from, and password policies do not have a default charset.
The following policy is **NOT** valid and will be rejected:
```hcl
length = 20
```
## Configuration & available rules
### `length` parameter
- `length` `(int: <required>)` - Specifies how long the generated password will be. Must be >= 4.
Length is **not** a rule. It is the only part of the configuration that does not adhere to the
guess-and-check approach of rules.
### Rule `charset`
Allows you to specify a minimum number of characters from a given charset. For instance: a password must
have at least one lowercase letter. This rule also helps construct the charset that password generation
draws from. In order to generate a password, at least one charset must be specified.
If multiple charsets are specified, all of the charsets will be combined and de-duplicated prior to
generating any candidate passwords. Each individual `charset` rule will still need to be adhered to in
order to successfully generate passwords.
~> After combining and de-duplicating charsets, the length of the charset that candidate passwords
are generated from must be no longer than 256 characters.
#### Parameters
- `charset` `(string: <required>)` – A string representation of the character set that this rule observes.
Accepts UTF-8 compatible strings. All characters within the string must be printable.
Note that special and control characters (such as `<`, `>`, and `&`) may be escaped in the returned JSON output, as per the JSON specification.
- `min-chars` `(int: 0)` - Specifies a minimum number of characters required from the charset specified in
this rule. For example: if `min-chars = 2`, the password must have at least 2 characters from `charset`.
#### Example
```hcl
length = 20
rule "charset" {
charset = "abcde"
min-chars = 1
}
rule "charset" {
charset = "01234"
min-chars = 1
}
```
This policy will generate passwords from the charset `abcde01234`. However, the password must have at
least one character that is from `abcde` and at least one character from `01234`. If charsets overlap
between rules, the charsets will be de-duplicated to prevent bias towards the overlapping set.
For instance: if you have two charset rules, `abcde` & `cdefg`, the charset `abcdefg` will be used to
generate candidate passwords, but at least one character from each of `abcde` & `cdefg` must still appear
in the password.
If `min-chars` is not specified (or set to `0`) then this charset will not have a minimum required number
of characters, but it will be used to select characters from. Example:
```hcl
length = 8
rule "charset" {
charset = "abcde"
}
rule "charset" {
charset = "01234"
min-chars = 1
}
```
This policy generates 8 character passwords from the charset `abcde01234` and requires at least one
character from `01234` to be in it, but does not require any characters from `abcde`. The password
`04031945` may result from this policy, even though no alphabetical characters are in it.
## Default password policy
Vault ships with a default password policy that applies to any password
generated by Vault without an explicit policy assignment. The default
policy requires passwords include:
- 20 characters total
- 1 uppercase character
- 1 lowercase character
- 1 number
- 1 special character
```hcl
length = 20
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "-"
min-chars = 1
}
```
## Tutorial
Refer to [User Configurable Password Generation for Secret
Engines](/vault/tutorials/policies/password-policies)
for a step-by-step tutorial.
---
layout: docs
page_title: Event Notifications
description: >-
Event notifications allow Vault and plugins to exchange arbitrary activity
data within Vault and with external subscribers via WebSockets.
---
# Event Notifications
@include 'alerts/enterprise-only.mdx'
Event notifications are arbitrary, **non-secret** data that can be exchanged between producers (Vault and plugins)
and subscribers (Vault components and external users via the API).
## Event types
<!-- This information will probably be migrated to the plugin pages eventually -->
<Note title="Note">
Event types without the `data_path` metadata field require a root token in order to be consumed from the `/v1/sys/events/subscribe/{eventType}` API endpoint.
</Note>
Internal components of Vault as well as external plugins can generate event notifications.
These are published to "event types", sometimes called "topics" in other event systems.
All event notifications of a specific event type will have the same format for their
additional `metadata` field.
The following event types are currently generated by Vault and its builtin plugins automatically:
| Plugin | Event Type | Metadata | Vault version |
|----------|-------------------------------------|------------------------------------------------|---------------|
| database | `database/config-delete` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/config-write` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/creds-create` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/reload` | `modified`, `operation`, `path`, `plugin_name` | 1.16 |
| database | `database/reset` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/role-create` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/role-delete` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/role-update` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/root-rotate-fail` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/root-rotate` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/rotate-fail` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/rotate` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/static-creds-create-fail` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/static-creds-create` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/static-role-create` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/static-role-delete` | `modified`, `operation`, `path`, `name` | 1.16 |
| database | `database/static-role-update` | `modified`, `operation`, `path`, `name` | 1.16 |
| kv | `kv-v1/delete` | `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v1/write` | `data_path`, `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/config-write` | `data_path`, `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/data-delete` | `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/data-patch` | `data_path`, `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/data-write` | `data_path`, `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/delete` | `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/destroy` | `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/metadata-delete` | `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/metadata-patch` | `data_path`, `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/metadata-write` | `data_path`, `modified`, `operation`, `path` | 1.13 |
| kv | `kv-v2/undelete` | `data_path`, `modified`, `operation`, `path` | 1.13 |
## Event notifications format
Event notifications may be formatted in protobuf binary format or as JSON.
See `EventReceived` in [`sdk/logical/event.proto`](https://github.com/hashicorp/vault/blob/main/sdk/logical/event.proto) in the relevant Vault version for the protobuf schema.
When formatted as JSON, the event notification conforms to the [CloudEvents](https://cloudevents.io/) specification.
- `id` `(string)` - CloudEvents unique identifier for the event notification. The `id` is unique for each event notification, and event notifications with the same `id` represent the same event notification.
- `source` `(string)` - CloudEvents source, which is set to `vault://` followed by the Raft node ID or the hostname of the host that generated the event notification.
- `specversion` `(string)` - The CloudEvents specification version this conforms to.
- `type` `(string)` - CloudEvents type this event notification corresponds to, which is currently always `*`.
- `datacontenttype` `(string)` - CloudEvents content type of the event notification, which is currently always `application/json`.
- `time` `(string)` - ISO 8601-formatted timestamp for when the event notification was generated.
- `data` `(object)` - Vault-specific data.
  - `event` `(Event)` - contains the event notification that happened.
    - `id` `(string)` - (repeat of the `id` parameter)
    - `metadata` `(object)` - arbitrary extra data customized for the event type.
  - `event_type` `(string)` - the event type that was published.
  - `plugin_info` `(PluginInfo)` - information about the plugin that generated the event, if applicable.
    - `mount_class` `(string)` - the class of plugin, e.g., `secret`, `auth`.
    - `mount_accessor` `(string)` - the unique ID of the mounted plugin.
    - `mount_path` `(string)` - the path that the plugin is mounted at.
    - `plugin` `(string)` - the name of the plugin, e.g., `kv`.
Here is an example event notification in JSON format:
```json
{
"id": "a3be9fb1-b514-519f-5b25-b6f144a8c1ce",
"source": "vault://mycluster",
"specversion": "1.0",
"type": "*",
"data": {
"event": {
"id": "a3be9fb1-b514-519f-5b25-b6f144a8c1ce",
"metadata": {
"current_version": "1",
"data_path": "secret/data/foo",
"modified": "true",
"oldest_version": "0",
"operation": "data-write",
"path": "secret/data/foo"
}
},
"event_type": "kv-v2/data-write",
"plugin_info": {
"mount_class": "secret",
"mount_accessor": "kv_5dc4d18e",
"mount_path": "secret/",
"plugin": "kv"
}
},
"datacontentype": "application/cloudevents",
"time": "2023-09-12T15:19:49.394915-07:00"
}
```
## Subscribing to event notifications
<Note title="Note">
For multi-node Vault deployments, Vault only accepts subscriptions on the active node. If a client attempts to subscribe to events on a standby node,
Vault will respond with a redirect to the active node. Vault uses the [`api_addr`](/vault/docs/configuration#api_addr) of the active node's configuration to route the redirect.
Vault deployments with performance replication must subscribe to events on the
primary performance cluster. Vault ignores subscriptions made from secondary
clusters.
</Note>
Vault has an API endpoint, `/v1/sys/events/subscribe/{eventType}`, that allows users to subscribe to event notifications via a
WebSocket stream.
This endpoint supports the standard authentication and authorization workflows used by other Vault endpoints.
The `{eventType}` parameter is a non-empty string of what event type to subscribe to, which may contain wildcards (`*`)
to subscribe to multiple event types, e.g., `kv-v2/data-*`.
By default, the event notifications are delivered in protobuf binary format.
The endpoint can also format the data as JSON if the `json` query parameter is set to `true`:
```shell-session
$ wscat -H "X-Vault-Token: $(vault print token)" --connect 'ws://127.0.0.1:8200/v1/sys/events/subscribe/kv-v2/data-write?json=true'
{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","source":"vault://mycluster","specversion":"1.0","type":"*","data":{"event":{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","metadata":{"current_version":"1","data_path":"secret/data/foo","modified":"true","oldest_version":"0","operation":"data-write","path":"secret/data/foo"}},"event_type":"kv-v2/data-write","plugin_info":{"mount_class":"secret","mount_accessor":"kv_5dc4d18e","mount_path":"secret/","plugin":"kv"}},"datacontentype":"application/cloudevents","time":"2023-09-12T15:19:49.394915-07:00"}
...
```
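Wildcards work over the WebSocket endpoint as well; for example, a hypothetical session (using the same token) that subscribes to every KV v2 data event at once:
```shell-session
$ wscat -H "X-Vault-Token: $(vault print token)" --connect 'ws://127.0.0.1:8200/v1/sys/events/subscribe/kv-v2/data-*?json=true'
```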
The Vault CLI supports this endpoint via the `events subscribe` command, which will output a stream of
JSON for the requested event notifications (one line per event notification):
```shell-session
$ vault events subscribe kv-v2/data-write
{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","source":"vault://mycluster","specversion":"1.0","type":"*","data":{"event":{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","metadata":{"current_version":"1","data_path":"secret/data/foo","modified":"true","oldest_version":"0","operation":"data-write","path":"secret/data/foo"}},"event_type":"kv-v2/data-write","plugin_info":{"mount_class":"secret","mount_accessor":"kv_5dc4d18e","mount_path":"secret/","plugin":"kv"}},"datacontentype":"application/cloudevents","time":"2023-09-12T15:19:49.394915-07:00"}
...
```
## Policies
To subscribe to an event notification, you must have the following policy grants:
1. `read` capability on `/v1/sys/events/subscribe/{eventType}`, where `{eventType}` is the event type that will be
subscribed to. The path may contain wildcards.
An example blanket policy is:
```hcl
path "sys/events/subscribe/*" {
capabilities = ["read"]
}
```
2. `list` and `subscribe` capabilities on the *path of the secret* for events
related to secrets. The policy must also provide a `subscribe_event_types`
entry with the specific event notifications subscribers are allowed to use. For example,
to receive event notifications related to the KV secrets engine path,
`secret/my-data`, a valid policy would be:
```hcl
path "secret/my-data" {
capabilities = ["list", "subscribe"]
subscribe_event_types = ["*"]
}
```
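Putting both grants together, a minimal sketch (assuming the two stanzas above are combined into a hypothetical `kv-subscribe.hcl` file) that writes the policy, mints a token with it, and subscribes:
```shell-session
$ vault policy write kv-subscribe kv-subscribe.hcl
Success! Uploaded policy: kv-subscribe

$ VAULT_TOKEN=$(vault token create -policy=kv-subscribe -field=token) \
    vault events subscribe 'kv-v2/data-write'
```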
Vault continuously evaluates policies for WebSocket subscriptions and
caches the results for a short period of time to improve performance.
As a result, event notifications **may** still be sent for a few minutes after a token is
revoked or a policy is deleted.
## Supported versions
| Version | Support |
|---------|---------------------------------------------|
| <= 1.12 | Not supported |
| 1.13 | Supported; **disabled** by default |
| 1.14 | Supported; **disabled** by default |
| 1.15 | Supported (beta); **enabled** by default |
| 1.16+ | Generally available; **enabled** by default |
For versions where event notifications are disabled by default, you can enable the
functionality with the `events.alpha1`
[experiment option](/vault/docs/configuration#experiments) in your Vault
configuration or from the command line with the `-experiment` flag. For example:
```shell-session
$ vault server -experiment events.alpha1
```
---
layout: docs
page_title: Secure external data with Vault
description: >-
Secure external data with Vault transit, tokenization, and transforms.
---
# Secure external data with Vault
Not all personally identifiable information (PII) lives in Vault. For example,
you may need to store credit card numbers, patient IDs, or social security
numbers in external databases.
It can be difficult to secure sensitive data outside Vault and balance the
need for applications to access the data efficiently while adhering to
stringent security standards and protocols. Vault helps you secure external data
with **transit encryption**, **tokenization**, and **transforms**.
Transform consists of three modes, called _transformations_: Format Preserving
Encryption (**FPE**), which encrypts and decrypts values while retaining their
formats; **masking**, which replaces sensitive information with masking
characters; and **tokenization**, which replaces sensitive information with
mathematically unrelated tokens.

## Encrypt sensitive data with Vault transit
Encrypting sensitive data is an obvious solution for protecting sensitive data.
But independently implementing robust data encryption can be complex and
expensive. The Vault transit plugin encrypts data, returns the resulting
ciphertext, and manages the associated encryption keys. Your application only
ever persists the ciphertext and never deals directly with the encryption key.
For example, the following diagram shows a credit card number going into the
transit system and returning to persistent storage as encrypted data.
<ImageConfig hideBorder>

</ImageConfig>
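A minimal sketch of this flow with the Vault CLI (the key name `cards` is an assumption and the ciphertext shown is illustrative):
```shell-session
$ vault secrets enable transit

# Create a named encryption key; Vault holds the key and the application never sees it.
$ vault write -f transit/keys/cards

# Transit expects base64-encoded plaintext and returns only the ciphertext to persist.
$ vault write transit/encrypt/cards plaintext=$(echo -n "4111111111111111" | base64)
Key           Value
---           -----
ciphertext    vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==
```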
The tradeoff with encryption is that the ciphertext is often much longer than
the original data. Additionally, the ciphertext can contain characters that
are not allowed in the original data, which may cause your system to reject the
encrypted data. As a result, you may need to modify your database schema or adjust your
data validation system to avoid rejecting the encrypted data.
## Tokenize sensitive data
Tokenization safeguards sensitive information by randomly generating a token
with a one-way cryptographic hash function. Rather than storing the data
directly, the system saves the token to persistent storage and maintains a
mapping between the original data and the token.
<ImageConfig hideBorder>

</ImageConfig>
Combining effective tokenization and a robust random number generator ensures
protected data security regardless of where the data lives. However, even if
tokenization hides the format of the data, it can be a leaky abstraction if the
final token ends up the same length as the original data. Additionally, when the
token length matches the original length, format checkers may not realize the
data is tokenized and could reject the tokenized value as "invalid data".
Tokenization also presents scalability challenges because the hash table expands
any time the system stores a new tokenized value. A rapidly expanding hash table
affects storage and performance as the speed of hash searches declines.
Maintaining cryptographic data security with tokenization is a complex task that
requires protection of the tokenized data **and** the hash table to ensure data
integrity and security.
## Obscure sensitive data with Vault transform
<EnterpriseAlert>
The Vault transform plugin requires a Vault Enterprise license with Advanced
Data Protection (ADP).
</EnterpriseAlert>
We recommend using the Vault transform plugin for securing external, sensitive
data with Vault. The transform plugin supports three transformations:
- format preserving encryption
- data masking
- tokenization
In addition to providing the option of data masking, Vault transform simplifies
some of the complexities with stand-alone encryption and tokenization.
### Format preserving encryption
Format preserving encryption (FPE) is a two-way transformation that encrypts
external data while maintaining the original format and length. For example,
transforming a credit card number to an encoded ciphertext made of 16 numbers.
<ImageConfig hideBorder>

</ImageConfig>
Unlike stand-alone encryption, FPE maintains the original length and data
structure for encoded data so the transformed data works with your existing
database schema and validation systems. And, unlike tokenization, FPE preserves
the original structure without the risk of leaky abstraction.
<Note title="FPE is secure">
Vault uses
the [FF3-1](https://csrc.nist.gov/publications/detail/sp/800-38g/rev-1/draft) algorithm
to ensure the security of the encoded ciphertexts. The National Institute of
Standards and Technology (NIST) vets, updates, and tests the FF3-1 algorithm
to protect against specific types of attacks and potential future threats from
supercomputers.
</Note>
In addition to providing built-in transformation templates for common data like
credit card numbers and US social security numbers, format preserving encryption
supports custom transformation templates. You can use regular expressions to
specify the values you want to transform and enforce a schema on the encoded
value.
FPE transformation is also stateless, which means that Vault does not store the
protected secret. Instead, Vault protects the encryption key needed to decrypt
the ciphertext. By only storing the information needed to decrypt the
ciphertext, Vault provides maximum performance for encoding and decoding
operations while also minimizing exposure risk for the data.
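A minimal sketch of FPE encode and decode with the CLI (the role name `payments` and transformation name `card-number` are assumptions modeled on the transform tutorials; the encoded value is illustrative):
```shell-session
$ vault write transform/encode/payments value=4111-1111-1111-1111 transformation=card-number
Key              Value
---              -----
encoded_value    5283-2068-3944-3597

$ vault write transform/decode/payments value=5283-2068-3944-3597 transformation=card-number
Key              Value
---              -----
decoded_value    4111-1111-1111-1111
```
Note how the encoded value keeps the `####-####-####-####` shape, so existing schemas and validation checks continue to pass.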
### Data masking
Data masking is a one-way transformation that replaces characters on the input
value with a predefined translation character.
<Warning title="Use with caution">
Masking is non-reversible. We do not recommend masking for situations where
you need to retrieve and decode the original value.
</Warning>
Data masking is a good solution when you need to show or print sensitive data
without full readability. For example, masking a bank account number in an
online banking portal to prevent potential security breaches from bad actors who
might be observing the screen.
### Tokenization
Unlike standalone tokenization, tokenization with Vault transform is a two-way,
random encoding that satisfies the PCI-DSS requirement for data irreversibility
while still allowing you to decode tokens back to their original plaintext.
To support token decoding, Vault secures a cryptographic mapping of tokens and
plaintext values in internal storage. Even if an attacker steals the underlying
transformation key and mapping values from Vault, tokenization of the data
prevents the attacker from recovering the original plaintext.
<ImageConfig hideBorder>

</ImageConfig>
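The CLI workflow mirrors FPE, except the result is a long token with no structural relationship to the input. A hypothetical sketch (the role name `mobile-pay` and the token shown are illustrative):
```shell-session
$ vault write transform/encode/mobile-pay value=4111-1111-1111-1111
Key              Value
---              -----
encoded_value    Q529ZRjmXtvBdyYx3qKFkEVNwnCM7hTg24c2W9JprnDeHbLmSUSkwr1

$ vault write transform/decode/mobile-pay value=Q529ZRjmXtvBdyYx3qKFkEVNwnCM7hTg24c2W9JprnDeHbLmSUSkwr1
Key              Value
---              -----
decoded_value    4111-1111-1111-1111
```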
Vault transform creates a new key for each tokenization transformation, which
helps ensure a strong cryptographic distinction between different tokenization
use cases. For example, a credit card processor may want to distinguish between
the same credit card number used by different merchants without having to decode
the token.
Tokenization transform also supports automatic key rotation based on a
configurable time interval and minimum key version. Each configured tokenization
transformation keeps a set of versioned keys. When a key rotates, older key
versions, within the configured age limit, are still available for decoding
tokens generated in the past. Vault cannot decode generated tokens with keys
below the minimum key version.
<Highlight title="Convergent tokenization">
By default, tokenization produces a unique token for every encode operation.
Vault transform supports **convergent tokenization**, which lets you use the
same encoded value for a given input.
Convergent tokenization lets you perform statistical analysis of the tokens
in your system **without decoding the token**. For example, counting the
entries for a given token, querying relationships between a token and other
fields in your database, and relating information that is tokenized across
multiple systems.
</Highlight>
### Performance considerations with Vault transform
Tokenization transformation is stateful, which means the encode operation must
perform writes to storage on the primary node of your Vault cluster. As a
result, any storage performance limits on the primary node also limit the
scalability of the encode operation.
In comparison, neither Vault transit encryption nor FPE transformation write to
storage, and both can be horizontally scaled using performance standby nodes.
For high-performance use cases, we recommend that you configure Vault to store
the token mapping in an external database. External stores can achieve a much
higher performance scale and reduce the load on the internal storage for your
Vault installation.
Vault currently supports the following external storage systems:
- PostgreSQL
- MySQL
- MSSQL
For more information on external storage, review the
[Tokenization transform storage](/vault/docs/secrets/transform/tokenization#storage)
documentation.
### Learn more about Vault transform
- [Transform secrets engine tutorial](/vault/tutorials/adp/transform)
- [Transform tokenization overview](/vault/docs/secrets/transform/tokenization)
- [Encrypt data with Transform tutorial](/vault/tutorials/adp/transform-code-example)
- [Tokenize data with Transform tutorial](/vault/tutorials/adp/tokenization)
- [Transform secrets engine API](/vault/api-docs/secret/transform)
---
layout: docs
page_title: Username Templating
description: >-
  Username templating is used in some secret engines to allow operators to define
how dynamic usernames are generated.
---
# Username templating
Some of the secrets engines that generate dynamic users for external systems provide the ability for Vault operators
to customize how usernames are generated for said external systems. This customization feature uses the
[Go template language](https://golang.org/pkg/text/template/). This page describes the basics of using these templates
for username generation but does not go into great depth on more advanced uses of the templating language.
See the API documentation for the given secret engine to determine if it supports username templating and for more
details on using it with that engine.
~> When customizing how usernames are generated, take care to include enough randomness to guarantee uniqueness;
otherwise, multiple calls to create the credentials may interfere with each other.
In addition to the functionality built into the Go template language, a number of additional functions are available:
## Available functions
### String/Character manipulation
`lowercase` - Lowercases the input value.<br/>
**Example**: `{{.FieldName | lowercase}}`
`replace` - Find/replace on the input value.<br/>
**Example**: `{{.FieldName | replace "-" "_"}}`
`truncate` - Truncates the input value to the specified number of characters.<br/>
**Example**: `{{.FieldName | truncate 10}}`
`truncate_sha256` - Truncates the input value to the specified number of characters. The last 8 characters of the
new value will be replaced by the first 8 characters of the SHA256 hash of the truncated characters.<br/>
**Example**: `{{.FieldName | truncate_sha256 20}}`. If `FieldName` is `abcdefghijklmnopqrstuvwxyz`, all characters after
the 12th (`l`) are removed and SHA256 hashed (`872808ffbf...1886ca6f20`).
The first 8 characters of the hash (`872808ff`) are then appended to the end of the first 12 characters from the
original value: `abcdefghijkl872808ff`.
`uppercase` - Uppercases the input value.<br/>
**Example**: `{{.FieldName | uppercase}}`
### Generating values
`random` - Generates a random string from lowercase letters, uppercase letters, and numbers. Must include a
number indicating how many characters to generate.<br/>
**Example**: `{{random 20}}` generates 20 random characters
`timestamp` - The current time. Must provide a formatting string based on Go’s [time package](https://golang.org/pkg/time/).<br/>
**Example**: `{{timestamp "2006-01-02T15:04:05Z"}}`
`unix_time` - The current unix timestamp (number of seconds since Jan 1 1970).<br/>
**Example**: `{{unix_time}}`
`unix_time_millis` - The current unix timestamp in milliseconds.<br/>
**Example**: `{{unix_time_millis}}`
`uuid` - Generates a random UUID.<br/>
**Example**: `{{uuid}}`
### Hashing
`base64` - Base64 encodes the input value.<br/>
**Example**: `{{.FieldName | base64}}`
`sha256` - SHA256 hashes the input value.<br/>
**Example**: `{{.FieldName | sha256}}`
## Examples
Each secret engine provides a different set of data to the template. Please see the associated secret engine's
documentation for details on what values are provided to the template. The examples below are modeled after the
[Database engine's](/vault/docs/secrets/databases) data, however the specific fields that are provided from a given engine
may differ from these examples. Additionally, the time is assumed to be 2009-02-13 11:31:30PM GMT
(unix timestamp: 1234567890) and random characters are the ordered english alphabet: `abcdefghijklmnopqrstuvwxyz`.
-> Note: The space between `{{`/`}}` and the values/functions is optional.
For instance: `{{random 20}}` is equivalent to `{{ random 20 }}`
| Field name | Value |
| ------------- | ------------------------- |
| `DisplayName` | `token-with-display-name` |
| `RoleName` | `my_custom_database_role` |
To reference either of these fields, a `.` must be put in front of the field name: `{{.DisplayName}}`. Custom functions
do not include a `.` in front of them: `{{random 20}}`.
### Basic example
**Template**:
```
{{.DisplayName}}_{{.RoleName}}
```
**Username**:
```
token-with-display-name_my_custom_database_role
```
This is a basic example that references the two fields that are provided to the template. In simplest terms, this is
a simple string substitution.
~> This example does not have any randomness and should not be used when generating dynamic usernames. The purpose is to
demonstrate referencing data within the Go template language.
### Custom functions
**Template**:
```
FOO_{{.DisplayName | replace "-" "_" | uppercase}}_{{.RoleName | replace "-" "_" | uppercase}}_{{timestamp "2006-01-02T03_04_05Z-0700" | replace "-" "_"}}
```
**Username**:
```
FOO_TOKEN_WITH_DISPLAY_NAME_MY_CUSTOM_DATABASE_ROLE_2009_02_13T11_31_30Z_0700
```
`{{.DisplayName | replace "-" "_" | uppercase}}` - Replaces all dashes with underscores and then uppercases the display name.<br/>
`{{.RoleName | replace "-" "_" | uppercase}}` - Replaces all dashes with underscores and then uppercases the role name.<br/>
`{{timestamp "2006-01-02T03_04_05Z-0700" | replace "-" "_"}}` - Generates the current timestamp using the provided format and
replaces all dashes with underscores.
### Truncating to maximum length
**Template**:
```
{{ printf "v_%s_%s_%s_%s" (.DisplayName | truncate 8) (.RoleName | truncate 8) (random 20) (unix_time) | truncate 45 }}
```
**Username**:
```
v_token-wi_my_custo_abcdefghijklmnopqrst_1234
```
`.DisplayName | truncate 8` truncates the display name to 8 characters (`token-wi`).<br/>
`.RoleName | truncate 8` truncates the role name to 8 characters (`my_custo`).<br/>
`random 20` generates 20 random characters `abcdefghijklmnopqrst`.<br/>
`unix_time` generates the current timestamp as the number of seconds since January 1, 1970 (`1234567890`).<br/>
Each of these values is passed to `printf "v_%s_%s_%s_%s"`, which prepends them with `v_` and puts an underscore between
each field. This results in `v_token-wi_my_custo_abcdefghijklmnopqrst_1234567890`. This value is then passed to
`truncate 45` where the last 6 characters are removed which results in `v_token-wi_my_custo_abcdefghijklmnopqrst_1234`.
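For concreteness, here is a hedged sketch of where a template like the one above gets attached: the database secrets engine accepts a `username_template` parameter on its connection configuration. The connection details below are illustrative; consult your database plugin's documentation for the exact fields.

```shell-session
$ vault write database/config/my-postgresql-database \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@localhost:5432/postgres" \
    allowed_roles="my-role" \
    username="vaultuser" \
    password="vaultpassword" \
    username_template="{{.DisplayName | truncate 8}}_{{random 20}}"
```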
## Tutorial
Refer to the [Database secrets
engine](/vault/tutorials/db-credentials/database-secrets#define-a-username-template) for step-by-step instructions.
---
layout: docs
page_title: Tokens
description: Tokens are a core auth method in Vault. Concepts and important features.
---
# Tokens
<Warning heading="Internal token structure is volatile">
Tokens are opaque values so their structure is undocumented and subject to change.
Scripts and automations that rely on the internal structure of a token will break.
</Warning>
Tokens are the core method for _authentication_ within Vault. Tokens
can be used directly or [auth methods](/vault/docs/concepts/auth)
can be used to dynamically generate tokens based on external identities.
If you've gone through the getting started guide, you probably noticed that
`vault server -dev` (or `vault operator init` for a non-dev server) outputs an
initial "root token." This is the first method of authentication for Vault.
It is also the only auth method that cannot be disabled.
As stated in the [authentication concepts](/vault/docs/concepts/auth),
all external authentication mechanisms, such as GitHub, map down to dynamically
created tokens. These tokens have all the same properties as a normal manually
created token.
Within Vault, tokens map to information. The most important information mapped
to a token is a set of one or more attached
[policies](/vault/docs/concepts/policies). These policies control what the token
holder is allowed to do within Vault. Other mapped information includes
metadata that can be viewed and is added to the audit log, such as creation
time, last renewal time, and more.
Read on for a deeper dive into token concepts. See the
[tokens tutorial](/vault/tutorials/tokens/tokens)
for details on how these concepts play out in practice.
## Token types
There are three types of tokens. On this page `service` tokens and `batch` tokens are outlined,
while `recovery` tokens are covered separately in their [own page](/vault/docs/concepts/recovery-mode#recovery-tokens).
A section near the bottom of this page contains detailed information about their differences,
but it is useful to understand other token concepts first. The features in the following
sections all apply to service tokens, and their applicability to batch tokens is discussed
later.
### Token prefixes
Tokens have a specific prefix that indicates their type. As of Vault 1.10, this token
format was updated. The following table lists the prefix differences. This format
pattern and its change also apply for recovery tokens. After the prefix, a string of
24 or more randomly-generated characters is appended.
| Token Type | Vault 1.9.x or earlier | Vault 1.10 and later |
|-----------------|------------------------|----------------------|
| Service tokens | `s.<random>` | `hvs.<random>` |
| Batch tokens | `b.<random>` | `hvb.<random>` |
| Recovery tokens | `r.<random>` | `hvr.<random>` |
For example, a service token may look like `hvs.CvmS4c0DPTvHv5eJgXWMJg9r`.
## The token store
Often in documentation or in help channels, the "token store" is referenced.
This is the same as the [`token` authentication
backend](/vault/docs/auth/token). This is a special
backend in that it is responsible for creating and storing tokens, and cannot
be disabled. It is also the only auth method that has no login
capability -- all actions require existing authenticated tokens.
## Root tokens
Root tokens are tokens that have the `root` policy attached to them. Root
tokens can do anything in Vault. _Anything_. In addition, they are the only
type of token within Vault that can be set to never expire without any renewal
needed. As a result, it is purposefully hard to create root tokens; in fact
there are only three ways to create root tokens:
1. The initial root token generated at `vault operator init` time -- this token has no
expiration
1. By using another root token; a root token with an expiration cannot create a
root token that never expires
1. By using `vault operator generate-root` ([example](/vault/tutorials/operations/generate-root))
with the permission of a quorum of unseal key holders
Root tokens are useful in development but should be extremely carefully guarded
in production. In fact, the Vault team recommends that root tokens are only
used for just enough initial setup (usually, setting up auth methods
and policies necessary to allow administrators to acquire more limited tokens)
or in emergencies, and are revoked immediately after they are no longer needed.
If a new root token is needed, the `operator generate-root` command and associated
[API endpoint](/vault/api-docs/system/generate-root) can be used to generate one on-the-fly.
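As a sketch of that flow (values in angle brackets are placeholders):

```shell-session
# Start a root token generation session; Vault prints a nonce and a one-time password (OTP).
$ vault operator generate-root -init

# Each unseal key holder supplies their key share against the session nonce.
$ vault operator generate-root -nonce=<nonce>

# After the quorum is reached, decode the encoded token using the OTP from the -init step.
$ vault operator generate-root -decode=<encoded_token> -otp=<otp>
```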
It is also good security practice for there to be multiple eyes on a terminal
whenever a root token is live. This way multiple people can verify as to the
tasks performed with the root token, and that the token was revoked immediately
after these tasks were completed.
## Token hierarchies and orphan tokens
Normally, when a token holder creates new tokens, these tokens will be created
as children of the original token; tokens they create will be children of them;
and so on. When a parent token is revoked, all of its child tokens -- and all
of their leases -- are revoked as well. This ensures that a user cannot escape
revocation by simply generating a never-ending tree of child tokens.
Often this behavior is not desired, so users with appropriate access can create
`orphan` tokens. These tokens have no parent -- they are the root of their own
token tree. These orphan tokens can be created:
1. Via `write` access to the `auth/token/create-orphan` endpoint
1. By having `sudo` or `root` access to the `auth/token/create` endpoint
   and setting the `no_parent` parameter to `true` (see the example after this list)
1. Via token store roles
1. By logging in with any other (non-`token`) auth method
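For example, a minimal sketch of the second option via the CLI (the policy name is illustrative):

```shell-session
$ vault token create -orphan -policy=my-policy
```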
Users with appropriate permissions can also use the `auth/token/revoke-orphan`
endpoint, which revokes the given token but rather than revoke the rest of the
tree, it instead sets the token's immediate children to be orphans. Use with
caution!
## Token accessors
When tokens are created, a token accessor is also created and returned. This
accessor is a value that acts as a reference to a token and can only be used to
perform limited actions:
1. Look up a token's properties (not including the actual token ID)
1. Look up a token's capabilities on a path
1. Renew the token
1. Revoke the token
The token _making the call_, _not_ the token associated with the accessor, must
have appropriate permissions for these functions.
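For example, the CLI exposes these actions through an `-accessor` flag (`<accessor>` is a placeholder):

```shell-session
# Look up the token's properties; the token ID itself is not returned.
$ vault token lookup -accessor <accessor>

# Renew the token behind the accessor.
$ vault token renew -accessor <accessor>

# Revoke the token behind the accessor.
$ vault token revoke -accessor <accessor>
```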
There are many useful workflows around token accessors. As an example, a
service that creates tokens on behalf of another service (such as the
[Nomad](https://www.nomadproject.io/) scheduler) can store the accessor
correlated with a particular job ID. When the job is complete, the accessor can
be used to instantly revoke the token given to the job and all of its leased
credentials, limiting the chance that a bad actor will discover and use them.
Audit devices can optionally be set to not obfuscate token accessors in audit
logs. This provides a way to quickly revoke tokens in case of an emergency.
However, it also means that the audit logs can be used to perform a larger-scale
denial of service attack.
Finally, the only way to "list tokens" is via the `auth/token/accessors`
command, which actually gives a list of token accessors. While this is still a
dangerous endpoint (since listing all of the accessors means that they can then
be used to revoke all tokens), it also provides a way to audit and revoke the
currently-active set of tokens.
## Token Time-To-Live, periodic tokens, and explicit max TTLs
Every non-root token has a time-to-live (TTL) associated with it, which is a
current period of validity since either the token's creation time or last
renewal time, whichever is more recent. (Root tokens may have a TTL associated,
but the TTL may also be 0, indicating a token that never expires). After the
current TTL is up, the token will no longer function -- it, and its associated
leases, are revoked.
If the token is renewable, Vault can be asked to extend the token validity
period using `vault token renew` or the appropriate renewal endpoint. At this
time, various factors come into play. What happens depends upon whether the
token is a periodic token (available for creation by `root`/`sudo` users, token
store roles, or some auth methods), has an explicit maximum TTL
attached, or neither.
### The general case
In the general case, where there is neither a period nor explicit maximum TTL
value set on the token, the token's lifetime since it was created will be
compared to the maximum TTL. This maximum TTL value is dynamically generated
and can change from renewal to renewal, so the value cannot be displayed when a
token's information is looked up. It is based on a combination of factors:
1. The system max TTL, which is 32 days but can be changed in Vault's
configuration file.
1. The max TTL set on a mount using [mount
tuning](/vault/api-docs/system/mounts). This value
is allowed to override the system max TTL -- it can be longer or shorter,
and if set this value will be respected.
1. A value suggested by the auth method that issued the token. This
might be configured on a per-role, per-group, or per-user basis. This value
is allowed to be less than the mount max TTL (or, if not set, the system max
TTL), but it is not allowed to be longer.
Note that the values in (2) and (3) may change at any given time, which is why
a final determination about the current allowed max TTL is made at renewal time
using the current values. It is also why it is important to always ensure that
the TTL returned from a renewal operation is within an allowed range; if this
value is not extending, likely the TTL of the token cannot be extended past its
current value and the client may want to reauthenticate and acquire a new
token. However, outside of direct operator interaction, Vault will never revoke
a token before the returned TTL has expired.
### Explicit max TTLs
Tokens can have an explicit max TTL set on them. This value becomes a hard
limit on the token's lifetime -- no matter what the values in (1), (2), and (3)
from the general case are, the token cannot live past this explicitly-set
value. This has an effect even when using periodic tokens to escape the normal
TTL mechanism.
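For instance, this creates a token that starts with a one-hour TTL and can be renewed repeatedly, but never past 24 hours from creation (the policy name is illustrative):

```shell-session
$ vault token create -policy=my-policy -ttl=1h -explicit-max-ttl=24h
```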
### Periodic tokens
In some cases, having a token be revoked would be problematic -- for instance,
if a long-running service needs to maintain its SQL connection pool over a long
period of time. In this scenario, a periodic token can be used. Periodic tokens
can be created in a few ways:
1. By having `sudo` capability or a `root` token with the `auth/token/create`
endpoint
1. By using token store roles
1. By using an auth method that supports issuing these, such as
AppRole
At issue time, the TTL of a periodic token will be equal to the configured
period. At every renewal time, the TTL will be reset back to this configured
period, and as long as the token is successfully renewed within each of these
periods of time, it will never expire. Outside of `root` tokens, it is
currently the only way for a token in Vault to have an unlimited lifetime.
The idea behind periodic tokens is that it is easy for systems and services to
perform an action relatively frequently -- for instance, every two hours, or
even every five minutes. Therefore, as long as a system is actively renewing
this token -- in other words, as long as the system is alive -- the system is
allowed to keep using the token and any associated leases. However, if the
system stops renewing within this period (for instance, if it was shut down),
the token will expire relatively quickly. It is good practice to keep this
period as short as possible, and generally speaking it is not useful for humans
to be given periodic tokens.
There are a few important things to know when using periodic tokens:
- When a periodic token is created via a token store role, the _current_ value
of the role's period setting will be used at renewal time
- A token with both a period and an explicit max TTL will act like a periodic
token but will be revoked when the explicit max TTL is reached
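As a minimal sketch, a caller with `sudo` capability on `auth/token/create` could issue and maintain a periodic token like this (the policy name is illustrative):

```shell-session
# Issue a token whose TTL resets to 24 hours on every renewal.
$ vault token create -policy=my-policy -period=24h

# Renew it before each period elapses to keep it alive indefinitely.
$ vault token renew <token>
```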
## CIDR-bound tokens
Some tokens are able to be bound to CIDR(s) that restrict the range of client
IPs allowed to use them. These affect all tokens except for non-expiring root
tokens (those with a TTL of zero). If a root token has an expiration, it also
is affected by CIDR-binding.
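CIDR binding is configured where the token is issued; for example, on a token store role (the role name and CIDR are illustrative):

```shell-session
$ vault write auth/token/roles/nomad-server \
    allowed_policies="nomad" \
    token_bound_cidrs="10.0.0.0/8"

$ vault token create -role=nomad-server
```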
## Token types in detail
As described above, the two general-purpose token types are service tokens and batch tokens.
### Service tokens
Service tokens are what users will generally think of as "normal" Vault tokens.
They support all features, such as renewal, revocation, creating child tokens,
and more. They are correspondingly heavyweight to create and track.
### Batch tokens
Batch tokens are encrypted blobs that carry enough information for them to
be used for Vault actions, but they require no storage on disk to track them.
As a result they are extremely lightweight and scalable, but lack most of the
flexibility and features of service tokens.
### Token type comparison
This reference chart describes the difference in behavior between service and
batch tokens.
| | Service Tokens | Batch Tokens |
| --------------------------------------------------- | ------------------------------------------------------: | ----------------------------------------------: |
| Can Be Root Tokens | Yes | No |
| Can Create Child Tokens | Yes | No |
| Can be Renewable | Yes | No |
| Manually Revocable | Yes | No |
| Can be Periodic | Yes | No |
| Can have Explicit Max TTL | Yes | No (always uses a fixed TTL) |
| Has Accessors | Yes | No |
| Has Cubbyhole | Yes | No |
| Revoked with Parent (if not orphan) | Yes | Stops Working |
| Dynamic Secrets Lease Assignment | Self | Parent (if not orphan) |
| Can be Used Across Performance Replication Clusters | No | Yes (if orphan) |
| Creation Scales with Performance Standby Node Count | No | Yes |
| Cost | Heavyweight; multiple storage writes per token creation | Lightweight; no storage cost for token creation |
### Service vs. batch token lease handling
#### Service tokens
Leases created by service tokens (including child tokens' leases) are tracked
along with the service token and revoked when the token expires.
#### Batch tokens
Leases created by batch tokens are constrained to the remaining TTL of the
batch tokens and, if the batch token is not an orphan, are tracked by the
parent. They are revoked when the batch token's TTL expires, or when the batch
token's parent is revoked (at which point the batch token is also denied access
to Vault).
As a corollary, batch tokens can be used across performance replication
clusters, but only if they are orphan, since non-orphan tokens will not be able
to ensure the validity of the parent token.
## Error responses
When using a token that has been revoked, exceeded its TTL, or is an otherwise invalid value, Vault will respond
with a `403` response code error containing the following error messages: `invalid token` and `permission denied`.
When using a token with incorrect policy access, Vault will respond with a `403` response code error containing the error message
`permission denied`.
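As a hedged illustration of the second case (the address and path are only examples):

```shell-session
$ curl \
    --header "X-Vault-Token: <token_without_access>" \
    http://127.0.0.1:8200/v1/secret/data/foo

{"errors":["permission denied"]}
```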
---
layout: docs
page_title: Authentication
description: >-
Before performing any operation with Vault, the connecting client must be
authenticated.
---
# Authentication
Authentication in Vault is the process by which user or machine supplied
information is verified against an internal or external system. Vault supports
multiple [auth methods](/vault/docs/auth) including GitHub,
LDAP, AppRole, and more. Each auth method has a specific use case.
Before a client can interact with Vault, it must _authenticate_ against an
auth method. Upon authentication, a token is generated. This token is
conceptually similar to a session ID on a website. The token may have attached
policy, which is mapped at authentication time. This process is described in
detail in the [policies concepts](/vault/docs/concepts/policies) documentation.
## Auth methods
Vault supports a number of auth methods. Some backends are targeted
toward users while others are targeted toward machines. Most authentication
backends must be enabled before use. To enable an auth method:
```shell-session
$ vault auth enable userpass -path=my-auth
```
This enables the "userpass" auth method at the path "my-auth". Often you will
see auth methods mounted at the same path as their name, but this is not a
requirement.
To learn more about this authentication, use the built-in `path-help` command:
```shell-session
$ vault path-help auth/my-auth
# ...
```
Vault supports multiple auth methods simultaneously, and you can even
mount the same type of auth method at different paths. Only one
authentication is required to gain access to Vault, and it is not currently
possible to force a user through multiple auth methods to gain
access, although some backends do support MFA.
## Tokens
There is an [entire page dedicated to tokens](/vault/docs/concepts/tokens),
but it is important to understand that authentication works by verifying
your identity and then generating a token to associate with that identity.
For example, even though you may authenticate using something like GitHub,
Vault generates a unique access token for you to use for future requests.
The CLI automatically attaches this token to requests, but if you're using
the API you'll have to do this manually.
This token given for authentication with any backend can also be used
with the full set of token commands, such as creating new sub-tokens,
revoking tokens, and renewing tokens. This is all covered on the
[token concepts page](/vault/docs/concepts/tokens).
## Authenticating
### Via the CLI
To authenticate with the CLI, `vault login` is used. This supports many
of the built-in auth methods. For example, with GitHub:
```shell-session
$ vault login -method=github token=<token>
...
```
After authenticating, you will be logged in. The CLI command will also
output your raw token. This token is used for revocation and renewal.
As the user logging in, the primary use case of the token is renewal,
covered below in the "Auth leases" section.
To determine what variables are needed for an auth method,
supply the `-method` flag without any additional arguments and help
will be shown.
If you're using a method that isn't supported via the CLI, then the API
must be used.
### Via the API
API authentication is generally used for machine authentication. Each
auth method implements its own login endpoint. Use the `vault path-help`
mechanism to find the proper endpoint.
For example, the GitHub login endpoint is located at `auth/github/login`.
And to determine the arguments needed, `vault path-help auth/github/login` can
be used.
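For example, a minimal sketch of a GitHub login over the API (the address is illustrative):

```shell-session
$ curl \
    --request POST \
    --data '{"token": "<github_personal_access_token>"}' \
    http://127.0.0.1:8200/v1/auth/github/login
```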
## Auth leases
Just like secrets, identities have
[leases](/vault/docs/concepts/lease) associated with them. This means that
you must reauthenticate after the given lease period to continue accessing
Vault.
To set the lease associated with an identity, reference the help for
the specific auth method in use. It is specific to each backend
how leasing is implemented.
And just like secrets, identities can be renewed without having to
completely reauthenticate. Just use `vault token renew <token>` with the
leased token associated with your identity to renew it.
## Code example
The following code snippet demonstrates how to renew auth tokens.
<CodeTabs heading="token renewal example">
<CodeBlockConfig lineNumbers>
```go
package main
import (
"context"
"fmt"
"log"
vault "github.com/hashicorp/vault/api"
auth "github.com/hashicorp/vault/api/auth/userpass"
)
// Once you've set the token for your Vault client, you will need to
// periodically renew its lease.
//
// A function like this should be run as a goroutine to avoid blocking.
//
// Production applications may also wish to be more tolerant of failures and
// retry rather than exiting.
//
// Additionally, enterprise Vault users should be aware that due to eventual
// consistency, the API may return unexpected errors when running Vault with
// performance standbys or performance replication, despite the client having
// a freshly renewed token. See https://developer.hashicorp.com/vault/docs/enterprise/consistency#vault-1-7-mitigations
// for several ways to mitigate this which are outside the scope of this code sample.
func renewToken(client *vault.Client) {
for {
vaultLoginResp, err := login(client)
if err != nil {
log.Fatalf("unable to authenticate to Vault: %v", err)
}
tokenErr := manageTokenLifecycle(client, vaultLoginResp)
if tokenErr != nil {
log.Fatalf("unable to start managing token lifecycle: %v", tokenErr)
}
}
}
// Starts token lifecycle management. Returns only fatal errors as errors,
// otherwise returns nil so we can attempt login again.
func manageTokenLifecycle(client *vault.Client, token *vault.Secret) error {
renew := token.Auth.Renewable // You may notice a different top-level field called Renewable. That one is used for dynamic secrets renewal, not token renewal.
if !renew {
log.Printf("Token is not configured to be renewable. Re-attempting login.")
return nil
}
watcher, err := client.NewLifetimeWatcher(&vault.LifetimeWatcherInput{
Secret: token,
Increment: 3600, // Learn more about this optional value in https://developer.hashicorp.com/vault/docs/concepts/lease#lease-durations-and-renewal
})
if err != nil {
return fmt.Errorf("unable to initialize new lifetime watcher for renewing auth token: %w", err)
}
go watcher.Start()
defer watcher.Stop()
for {
select {
// `DoneCh` will return if renewal fails, or if the remaining lease
// duration is under a built-in threshold and either renewing is not
// extending it or renewing is disabled. In any case, the caller
// needs to attempt to log in again.
case err := <-watcher.DoneCh():
if err != nil {
log.Printf("Failed to renew token: %v. Re-attempting login.", err)
return nil
}
// This occurs once the token has reached max TTL.
log.Printf("Token can no longer be renewed. Re-attempting login.")
return nil
// Successfully completed renewal
case renewal := <-watcher.RenewCh():
log.Printf("Successfully renewed: %#v", renewal)
}
}
}
func login(client *vault.Client) (*vault.Secret, error) {
// WARNING: A plaintext password like this is obviously insecure.
// See the hashicorp/vault-examples repo for full examples of how to securely
// log in to Vault using various auth methods. This function is just
// demonstrating the basic idea that a *vault.Secret is returned by
// the login call.
userpassAuth, err := auth.NewUserpassAuth("my-user", &auth.Password{FromString: "my-password"})
if err != nil {
return nil, fmt.Errorf("unable to initialize userpass auth method: %w", err)
}
authInfo, err := client.Auth().Login(context.TODO(), userpassAuth)
if err != nil {
return nil, fmt.Errorf("unable to login to userpass auth method: %w", err)
}
if authInfo == nil {
return nil, fmt.Errorf("no auth info was returned after login")
}
return authInfo, nil
}
```
</CodeBlockConfig>
</CodeTabs>
---
layout: docs
page_title: Policies
description: >-
Policies are how authorization is done in Vault, allowing you to restrict
which parts of Vault a user can access.
---
# Policies
Everything in Vault is path-based, and policies are no exception. Policies
provide a declarative way to grant or forbid access to certain paths and
operations in Vault. This section discusses policy workflows and syntaxes.
Policies are **deny by default**, so an empty policy grants no permission in the
system.
## Policy-authorization workflow
Before a human or machine can gain access, an administrator must configure Vault
with an [auth method](/vault/docs/concepts/auth). Authentication is
the process by which human or machine-supplied information is verified against
an internal or external system.
Consider the following diagram, which illustrates the steps a security team
would take to configure Vault to authenticate using a corporate LDAP or
ActiveDirectory installation. Even though this example uses LDAP, the concept
applies to all auth methods.
[![Vault Auth Workflow](/img/vault-policy-workflow.svg)](/img/vault-policy-workflow.svg)
1. The security team configures Vault to connect to an auth method.
This configuration varies by auth method. In the case of LDAP, Vault
needs to know the address of the LDAP server and whether to connect using TLS.
It is important to note that Vault does not store a copy of the LDAP database -
Vault will delegate the authentication to the auth method.
1. The security team authors a policy (or uses an existing policy) which grants
access to paths in Vault. Policies are written in HCL in your editor of
preference and saved to disk.
1. The policy's contents are uploaded and stored in Vault and referenced by name.
You can think of the policy's name as a pointer or symlink to its set of rules.
1. Most importantly, the security team maps data in the auth method to a policy.
For example, the security team might create mappings like:
> Members of the OU group "dev" map to the Vault policy named "readonly-dev".
or
> Members of the OU group "ops" map to the Vault policies "admin" and "auditor".
Now Vault has an internal mapping between a backend authentication system and
internal policy. When a user authenticates to Vault, the actual authentication
is delegated to the auth method. As a user, the flow looks like:
[![Vault Auth Workflow](/img/vault-auth-workflow.svg)](/img/vault-auth-workflow.svg)
1. A user attempts to authenticate to Vault using their LDAP credentials,
providing Vault with their LDAP username and password.
1. Vault establishes a connection to LDAP and asks the LDAP server to verify the
given credentials. Assuming this is successful, the LDAP server returns the
information about the user, including the OU groups.
1. Vault maps the result from the LDAP server to policies inside Vault using the
mapping configured by the security team in the previous section. Vault then
generates a token and attaches the matching policies.
1. Vault returns the token to the user. This token has the correct policies
assigned, as dictated by the mapping configuration that was setup by the
security team in advance.
The user then uses this Vault token for future operations. If the user performs
the authentication steps again, they will get a _new_ token. The token will have
the same permissions, but the actual token will be different. Authenticating a
second time does not invalidate the original token.
## Policy syntax
Policies are written in [HCL][hcl] or JSON and describe which paths in Vault a
user or machine is allowed to access.
[hcl]: https://github.com/hashicorp/hcl
Here is a very simple policy which grants read capabilities to the [KVv1](/vault/api-docs/secret/kv/kv-v1) path
`"secret/foo"`:
```hcl
path "secret/foo" {
capabilities = ["read"]
}
```
When this policy is assigned to a token, the token can read from `"secret/foo"`.
However, the token cannot update or delete `"secret/foo"`, since the
capabilities do not allow it. Because policies are **deny by default**, the
token would have no other access in Vault.
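You can verify this from a client by asking Vault which capabilities the
current token has on a path with `vault token capabilities` (a quick sketch;
output illustrative):

```shell-session
$ vault token capabilities secret/foo
read
```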
Here is a more detailed policy, and it is documented inline:
```hcl
# This section grants all access on "secret/*". Further restrictions can be
# applied to this broad policy, as shown below.
path "secret/*" {
capabilities = ["create", "read", "update", "patch", "delete", "list"]
}
# Even though we allowed secret/*, this line explicitly denies
# secret/super-secret. This takes precedence.
path "secret/super-secret" {
capabilities = ["deny"]
}
# Policies can also specify allowed, disallowed, and required parameters. Here
# the key "secret/restricted" can only contain "foo" (any value) and "bar" (one
# of "zip" or "zap").
path "secret/restricted" {
capabilities = ["create"]
allowed_parameters = {
"foo" = []
"bar" = ["zip", "zap"]
}
}
```
Policies use path-based matching to test the set of capabilities against a
request. A policy `path` may specify an exact path to match, or it could specify
a glob pattern which instructs Vault to use a prefix match:
```hcl
# Permit reading only "secret/foo". An attached token cannot read "secret/food"
# or "secret/foo/bar".
path "secret/foo" {
capabilities = ["read"]
}
# Permit reading everything under "secret/bar". An attached token could read
# "secret/bar/zip", "secret/bar/zip/zap", but not "secret/bars/zip".
path "secret/bar/*" {
capabilities = ["read"]
}
# Permit reading everything prefixed with "zip-". An attached token could read
# "secret/zip-zap" or "secret/zip-zap/zong", but not "secret/zip/zap".
path "secret/zip-*" {
capabilities = ["read"]
}
```
In addition, a `+` can be used to denote any number of characters bounded
within a single path segment (this appeared in Vault 1.1):
```hcl
# Permit reading the "teamb" path under any top-level path under secret/
path "secret/+/teamb" {
capabilities = ["read"]
}
# Permit reading secret/foo/bar/teamb, secret/bar/foo/teamb, etc.
path "secret/+/+/teamb" {
capabilities = ["read"]
}
```
Vault's architecture is similar to a filesystem. Every action in Vault has a
corresponding path and capability - even Vault's internal core configuration
endpoints live under the `"sys/"` path. Policies define access to these paths and
capabilities, which controls a token's access to credentials in Vault.
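For example, granting read access to Vault's own mount table uses the same
syntax as granting access to a secrets path (a minimal sketch; the path is
illustrative):

```hcl
# Allow reading the list of mounted secrets engines via sys/mounts.
path "sys/mounts" {
  capabilities = ["read"]
}
```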
## Priority matching
~> **Note:** The policy rules that Vault applies are determined by the most-specific match
available, using the priority rules described below. This may be an exact match
or the longest-prefix match of a glob. If the same pattern appears in multiple
policies, we take the union of the capabilities. If different patterns appear in
the applicable policies, we take only the highest-priority match from those
policies.
This means if you define a policy for `"secret/foo*"`, the policy would
also match `"secret/foobar"`. Specifically, when there are potentially multiple
matching policy paths, `P1` and `P2`, the following matching criteria are applied:
1. If the first wildcard (`+`) or glob (`*`) occurs earlier in `P1`, `P1` is lower priority
1. If `P1` ends in `*` and `P2` doesn't, `P1` is lower priority
1. If `P1` has more `+` (wildcard) segments, `P1` is lower priority
1. If `P1` is shorter, it is lower priority
1. If `P1` is smaller lexicographically, it is lower priority
For example, given the two paths, `"secret/*"` and `"secret/+/+/foo/*"`, the first
wildcard appears in the same place, both end in `*` and the latter has two wildcard
segments while the former has zero. So we end at rule (3), and give `"secret/+/+/foo/*"`
_lower_ priority.
Another example utilizes Vault [namespaces](/vault/docs/enterprise/namespaces). Given [nested](/vault/tutorials/enterprise/namespace-structure) namespaces `ns1/ns2/ns3` and two paths,
`"secret/*"` and `"ns1/ns2/ns3/secret/apps/*"`, where `secret` is a mountpoint in namespace `ns3`: the first path is
defined in a policy inside/relative to namespace `ns3`, while the second path is defined in a policy in the `root` namespace.
Both paths end in `*` but the first is shorter. So we end at rule (4), and give `"secret/*"` _lower_ priority.
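To make the rules concrete, here is a sketch of two overlapping paths that
could appear in a token's policies; for a request to `secret/foo`, only the
highest-priority match applies:

```hcl
# "secret/*" matches requests to secret/foo, but the exact path below is
# more specific, so only its capabilities apply to secret/foo.
path "secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Exact match: wins priority for requests to secret/foo.
path "secret/foo" {
  capabilities = ["read"]
}
```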
!> **Informational:** The glob character referred to in this documentation is the asterisk (`*`).
It _is not a regular expression_ and is only supported **as the last character of the path**!
When providing the `list` capability, it is important to note that listing
always operates on a prefix; because Vault sanitizes request paths to
prefixes, policies must operate on a prefix as well.
### Capabilities
Each path must define one or more capabilities which provide fine-grained
control over permitted (or denied) operations. As shown in the examples above,
capabilities are always specified as a list of strings, even if there is only
one capability.
To determine the capabilities needed to perform a specific operation, the `-output-policy` flag can be added to the CLI subcommand. For an example, refer to the [Print Policy Requirements](/vault/docs/commands#print-policy-requirements) document section.
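For example, to see the capability needed for a KV read (a sketch assuming
KVv2 is mounted at `secret/`; output illustrative):

```shell-session
$ vault kv get -output-policy secret/foo
path "secret/data/foo" {
  capabilities = ["read"]
}
```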
The list of capabilities includes the following:
- `create` (`POST/PUT`) - Allows creating data at the given path. Very few
parts of Vault distinguish between `create` and `update`, so most operations
require both `create` and `update` capabilities. Parts of Vault that
provide such a distinction are noted in documentation.
- `read` (`GET`) - Allows reading the data at the given path.
- `update` (`POST/PUT`) - Allows changing the data at the given path. In most
parts of Vault, this implicitly includes the ability to create the initial
value at the path.
- `patch` (`PATCH`) - Allows partial updates to the data at a given path.
- `delete` (`DELETE`) - Allows deleting the data at the given path.
- `list` (`LIST`) - Allows listing values at the given path. Note that the
keys returned by a `list` operation are _not_ filtered by policies. Do not
encode sensitive information in key names. Not all backends support listing.
In the list above, the associated HTTP verbs are shown in parentheses next to
the capability. When authoring policy, it is usually helpful to look at the HTTP
API documentation for the paths and HTTP verbs and map them back onto
capabilities. While the mapping is not strictly 1:1, they are often very
similarly matched.
In addition to the standard set, there are some capabilities that do not map to
HTTP verbs.
- `sudo` - Allows access to paths that are _root-protected_. Tokens are not
permitted to interact with these paths unless they have the `sudo`
capability (in addition to the other necessary capabilities for performing
an operation against that path, such as `read` or `delete`).
For example, modifying the audit log backends requires a token with `sudo`
privileges.
- `deny` - Disallows access. This always takes precedence regardless of any
other defined capabilities, including `sudo`.
- `subscribe` - Allows subscribing to [events](/vault/docs/concepts/events)
for the given path.
~> **Note:** Capabilities usually map to the HTTP verb, and not the underlying
action taken. This can be a common source of confusion. Generating database
credentials _creates_ database credentials, but the HTTP request is a GET which
corresponds to a `read` capability. Thus, to grant access to generate database
credentials, the policy would grant `read` access on the appropriate path.
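As a sketch of that pattern, granting the ability to generate credentials from
a hypothetical database role named `my-role` requires `read`, not `create`:

```hcl
# Generating credentials is an HTTP GET, so "read" is the capability needed.
path "database/creds/my-role" {
  capabilities = ["read"]
}
```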
## Templated policies
The policy syntax allows for variable replacement in some policy strings
with values available to the token. Currently, `identity` information can be
injected, and only the `path` keys in policies allow injection.
### Parameters
| Name | Description |
| :------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------ |
| `identity.entity.id` | The entity's ID |
| `identity.entity.name` | The entity's name |
| `identity.entity.metadata.<metadata key>` | Metadata associated with the entity for the given key |
| `identity.entity.aliases.<mount accessor>.id` | Entity alias ID for the given mount |
| `identity.entity.aliases.<mount accessor>.name` | Entity alias name for the given mount |
| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>` | Metadata associated with the alias for the given mount and metadata key |
| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key |
| `identity.groups.ids.<group id>.name` | The group name for the given group ID |
| `identity.groups.names.<group name>.id` | The group ID for the given group name |
| `identity.groups.ids.<group id>.metadata.<metadata key>` | Metadata associated with the group for the given key |
| `identity.groups.names.<group name>.metadata.<metadata key>` | Metadata associated with the group for the given key |
### Examples
The following policy dedicates a section of the KVv2 secrets engine to a specific user:
```hcl
path "secret/data/{{identity.entity.id}}/*" {
  capabilities = ["create", "update", "patch", "read", "delete"]
}
path "secret/metadata/{{identity.entity.id}}/*" {
  capabilities = ["list"]
}
```
If you want to create a shared section of KV that is associated with entities that are in a
group:
```hcl
# In the example below, the group ID maps a group and the path
path "secret/data/groups/{{identity.groups.ids.<group id>.name}}/*" {
  capabilities = ["create", "update", "patch", "read", "delete"]
}
path "secret/metadata/groups/{{identity.groups.ids.<group id>.name}}/*" {
  capabilities = ["list"]
}
```
~> **Note:** When developing templated policies, use IDs wherever possible. Each ID is
unique to the user, whereas names can change over time and can be reused. This
ensures that if a given user or group name is changed, the policy will be
mapped to the intended entity or group.
If you want to use the metadata associated with an authentication plugin in your
templates, you will need to get its _mount accessor_ and access it via the
`aliases` key.
You can get the mount accessor value using the following command:
```shell-session
$ vault auth list
Path Type Accessor Description
---- ---- -------- -----------
kubernetes/ kubernetes auth_kubernetes_xxxx n/a
token/ token auth_token_yyyy token based credentials
```
The following templated policy allows reading the path associated with the
Kubernetes service account namespace of the identity:
```hcl
path "secret/data/{{identity.entity.aliases.auth_kubernetes_xxxx.metadata.service_account_namespace}}/*" {
  capabilities = ["read"]
}
```
## Fine-grained control
In addition to the standard set of capabilities, Vault offers finer-grained
control over permissions at a given path. The capabilities associated with a
path take precedence over permissions on parameters.
### Parameter constraints
!> **Note:** The use of [globs](/vault/docs/concepts/policies#policy-syntax) (`*`) may result in [surprising or unexpected behavior](#parameter-constraints-limitations).
~> **Note:** The `allowed_parameters`, `denied_parameters`, and `required_parameters` fields are not supported for policies used with the [version 2 kv secrets engine](/vault/docs/secrets/kv/kv-v2).
See the [API Specification](/vault/api-docs/secret/kv/kv-v2) for more information.
Policies can take into account HTTP request parameters to further
constrain requests, using the following options:
- `required_parameters` - A list of parameters that must be specified.
```hcl
# This requires the user to create "secret/profile" with parameters/keys named
# "name" and "id", where kv v1 is enabled at "secret/".
path "secret/profile" {
capabilities = ["create"]
required_parameters = ["name", "id"]
}
```
- `allowed_parameters` - A list of keys and values that are
permitted on the given path.
- Setting a parameter with a value of the empty list allows the parameter to
contain any value.
```hcl
# This allows the user to update the password parameter value set on any
# users configured for userpass auth method. The password value can be
# anything. However, the user cannot update other parameter values such as
# token_ttl.
path "auth/userpass/users/*" {
capabilities = ["update"]
allowed_parameters = {
"password" = []
}
}
```
-> **Usage example:** The [ACL Policy Path
Templating](/vault/tutorials/policies/policy-templating)
tutorial demonstrates the use of `allowed_parameters` to permit a user to
update the user's password when using the [userpass auth
method](/vault/docs/auth/userpass) to log in with Vault.
- Setting a parameter with a value of a populated list allows the parameter
to contain only those values.
```hcl
# This allows the user to create or update an encryption key for transit
# secrets engine enabled at "transit/". When you do, you can set the
# "auto_rotate_period" parameter value so that the key gets rotated.
# However, the rotation period must be "8h", "24h", or "5d". Any other value
# will result in an error.
path "transit/keys/*" {
capabilities = ["create", "update"]
allowed_parameters = {
"auto_rotate_period" = ["8h", "24h", "5d"]
}
}
```
- If any keys are specified, all non-specified parameters will be denied
unless the parameter `"*"` is set to an empty array, which will
allow all other parameters to be modified. Parameters with specific values
will still be restricted to those values.
```hcl
# When kv v1 secrets engine is enabled at "secret/", this allows the user to
# create "secret/foo" with a parameter named "bar". The parameter "bar" can
# only contain the values "zip" or "zap", but any other parameters may be
# created with any value.
path "secret/foo" {
capabilities = ["create"]
allowed_parameters = {
"bar" = ["zip", "zap"]
"*" = []
}
}
```
- `denied_parameters` - A list of keys and values that are not permitted on the given
path. Any values specified here take precedence over `allowed_parameters`.
- Setting a parameter with a value of the empty list denies any changes to
that parameter.
```hcl
# This allows the user to update the userpass auth method's user
# configurations (e.g., "password"), but not the "token_policies"
# and "policies" parameter values.
path "auth/userpass/users/*" {
capabilities = ["update"]
denied_parameters = {
"token_policies" = []
"policies" = []
}
}
```
- Setting a parameter with a value of a populated list denies any parameter
containing those values.
```hcl
# This allows the user to create or update token roles. However, the
# "allowed_policies" parameter value cannot be "admin", but the user can
# assign any other policies to the parameter.
path "auth/token/roles/*" {
capabilities = ["create", "update"]
denied_parameters = {
"allowed_policies" = ["admin"]
}
}
```
- Setting to `"*"` will deny any parameter.
```hcl
# This allows the user to create or update an encryption key for transit
# secrets engine enabled at "transit/". However, the user cannot set any of
# the configuration parameters. As a result, the created key will have all
# parameters set to default values.
path "transit/keys/*" {
capabilities = ["create", "update"]
denied_parameters = {
"*" = []
}
}
```
- If any parameters are specified, all non-specified parameters are allowed,
unless `allowed_parameters` is also set, in which case normal rules apply.
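The two constraint types can also be combined; `denied_parameters` entries
take precedence over `allowed_parameters`. A sketch with illustrative paths
and parameters:

```hcl
# Only the "ttl" parameter may be set (allowed_parameters restricts all
# others), and it may hold any value except "0" (denied_parameters wins).
path "secret/foo" {
  capabilities = ["update"]
  allowed_parameters = {
    "ttl" = []
  }
  denied_parameters = {
    "ttl" = ["0"]
  }
}
```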
Parameter values also support prefix/suffix globbing. Globbing is enabled by
prepending or appending a splat (`*`) to the value:
```hcl
# Only allow a parameter named "bar" with a value starting with "foo-*".
path "secret/foo" {
capabilities = ["create"]
allowed_parameters = {
"bar" = ["foo-*"]
}
}
```
~> **Note:** The only value that can be used with the `*` parameter is `[]`.
#### Parameter constraints limitations
##### Default values
Evaluation of policies with `allowed_parameters`, `denied_parameters`, and `required_parameters` happens
without consideration of parameters' default values.
Given the following policy:
```hcl
# The "no_store" parameter cannot be false
path "secret/foo" {
capabilities = ["create"]
denied_parameters = {
"no_store" = [false, "false"]
}
}
```
The following operation will error, because "no_store" is set to false:
```shell-session
$ vault write secret/foo no_store=false value=bar
```
Whereas the following operation will succeed, even though the "no_store"
parameter must be a boolean and defaults to false:
```shell-session
# Succeeds because "no_store=false" isn't present in the parameters
$ vault write secret/foo value=bar
```
This is because the policy evaluator does not know what the default value is for
the "no_store" parameter. All it sees is that the denied parameter isn't present
in the command.
This can be resolved by requiring the "no_store" parameter in your policy:
```hcl
path "secret/foo" {
capabilities = ["create"]
required_parameters = ["no_store"]
denied_parameters = {
"no_store" = [false, "false"]
}
}
```
The following command, which previously succeeded, will now fail under the new policy
because there is no "no_store" parameter:
```shell-session
$ vault write secret/foo value=bar
```
##### Globbing
It's also important to note that the use of globbing may result in surprising
or unexpected behavior:
```hcl
# This allows the user to create, update, or patch "secret/foo" with a parameter
# named "bar". The values passed to parameter "bar" must start with "baz/",
# so values like "baz/quux" are fine. However, values like
# "baz/quux,wibble,wobble,wubble" would also be accepted. The API that
# underlies "secret/foo" might allow comma-delimited values for the "bar"
# parameter, and if it did, specifying a value like
# "baz/quux,wibble,wobble,wubble" would result in 4 different values getting
# passed along. Seeing values like "wibble" or "wobble" getting passed to
# "secret/foo" might surprise someone who expected the allowed_parameters
# constraint to only allow values starting with "baz/".
path "secret/foo" {
capabilities = ["create", "update", "patch"]
allowed_parameters = {
"bar" = ["baz/*"]
}
}
```
### Required response wrapping TTLs
These parameters can be used to set minimums/maximums on TTLs set by clients
when requesting that a response be
[wrapped](/vault/docs/concepts/response-wrapping), with a granularity of a
second. These use [duration format strings](/vault/docs/concepts/duration-format).
In practice, setting a minimum TTL of one second effectively makes response
wrapping mandatory for a particular path.
- `min_wrapping_ttl` - The minimum allowed TTL that clients can specify for a
wrapped response. In practice, setting a minimum TTL of one second
effectively makes response wrapping mandatory for a particular path. It can
also be used to ensure that the TTL is not too low, leading to end targets
being unable to unwrap before the token expires.
- `max_wrapping_ttl` - The maximum allowed TTL that clients can specify for a
wrapped response.
```hcl
# This effectively makes response wrapping mandatory for this path by setting min_wrapping_ttl to 1 second.
# This also sets this path's wrapped response maximum allowed TTL to 90 seconds.
path "auth/approle/role/my-role/secret-id" {
capabilities = ["create", "update"]
min_wrapping_ttl = "1s"
max_wrapping_ttl = "90s"
}
```
If both are specified, the minimum value must be less than the maximum. In
addition, if paths are merged from different stanzas, the lowest value
specified for each is the value that will result, in line with the idea of
keeping token lifetimes as short as possible.
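For instance, if two policies attached to the same token both match a path but
specify different wrapping TTLs, the lower value of each parameter wins (a
sketch; paths and values illustrative):

```hcl
# policy-a
path "auth/approle/role/my-role/secret-id" {
  capabilities     = ["create", "update"]
  max_wrapping_ttl = "120s"
}

# policy-b
path "auth/approle/role/my-role/secret-id" {
  capabilities     = ["create", "update"]
  max_wrapping_ttl = "90s"
}

# Effective max_wrapping_ttl for the token: 90s
```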
## Built-in policies
Vault has two built-in policies: `default` and `root`. This section describes
the two built-in policies.
### Default policy
The `default` policy is a built-in Vault policy that cannot be removed. By
default, it is attached to all tokens, but may be explicitly excluded at token
creation time by supporting authentication methods.
The policy contains basic functionality such as the ability for the token to
look up data about itself and to use its cubbyhole data. However, Vault is not
prescriptive about its contents. It can be modified to suit your needs; Vault
will never overwrite your modifications. If you want to stay up-to-date with
the latest upstream version of the `default` policy, simply read the contents
of the policy from an up-to-date `dev` server, and write those contents into
your Vault's `default` policy.
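A sketch of that sync using the CLI (assuming you point `VAULT_ADDR` at the
dev server for the read and at your own server for the write):

```shell-session
$ vault read -field=rules sys/policy/default > default.hcl
$ vault write sys/policy/default policy=@default.hcl
```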
To view all permissions granted by the default policy on your Vault
installation, run:
```shell-session
$ vault read sys/policy/default
```
To disable attachment of the default policy:
```shell-session
$ vault token create -no-default-policy
```
or via the API:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ..." \
--data '{"no_default_policy": "true"}' \
https://vault.hashicorp.rocks/v1/auth/token/create
```
### Root policy
The `root` policy is a built-in Vault policy that cannot be modified or removed.
Any user associated with this policy becomes a root user. A root user can do
_anything_ within Vault. As such, it is **highly recommended** that you revoke
any root tokens before running Vault in production.
When a Vault server is first initialized, there always exists one root user.
This user is used to do the initial configuration and setup of Vault. Once
configured, the initial root token should be revoked and more strictly
controlled users and authentication should be used.
To revoke a root token, run:
```shell-session
$ vault token revoke "<token>"
```
or via the API:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ..." \
--data '{"token": "<token>"}' \
https://vault.hashicorp.rocks/v1/auth/token/revoke
```
For more information, please read:
- [Production Hardening](/vault/tutorials/operations/production-hardening)
- [Generating a Root Token](/vault/tutorials/operations/generate-root)
## Managing policies
Policies are authored (written) in your editor of choice. They can be authored
in HCL or JSON, and the syntax is described in detail above. Once saved,
policies must be uploaded to Vault before they can be used.
### Listing policies
To list all registered policies in Vault:
```shell-session
$ vault read sys/policy
```
or via the API:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
https://vault.hashicorp.rocks/v1/sys/policy
```
~> **Note:** You may also see the CLI command `vault policies`. This is a convenience
wrapper around reading the sys endpoint directly. It provides the same
functionality but formats the output in a special manner.
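For example (policy names illustrative):

```shell-session
$ vault policies
default
my-policy
root
```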
### Creating policies
Policies may be created (uploaded) via the CLI or via the API. To create a new
policy in Vault:
```shell-session
$ vault policy write policy-name policy-file.hcl
```
or via the API:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ..." \
--data '{"policy":"path \"...\" {...} "}' \
https://vault.hashicorp.rocks/v1/sys/policy/policy-name
```
In both examples, the name of the policy is "policy-name". You can think of this
name as a pointer or symlink to the policy ACLs. Policies are attached to tokens by
name, which is then mapped to the set of rules corresponding to that name.
### Updating policies
Existing policies may be updated to change permissions via the CLI or via the
API. To update an existing policy in Vault, follow the same steps as creating a
policy, but use an existing policy name:
```shell-session
$ vault write sys/policy/my-existing-policy policy=@updated-policy.json
```
or via the API:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ..." \
--data '{"policy":"path \"...\" {...} "}' \
https://vault.hashicorp.rocks/v1/sys/policy/my-existing-policy
```
### Deleting policies
Existing policies may be deleted via the CLI or API. To delete a policy:
```shell-session
$ vault delete sys/policy/policy-name
```
or via the API:
```shell-session
$ curl \
--request DELETE \
--header "X-Vault-Token: ..." \
https://vault.hashicorp.rocks/v1/sys/policy/policy-name
```
This is an idempotent operation. Vault will not return an error when deleting a
policy that does not exist.
## Associating policies
Vault can automatically associate a set of policies to a token based on an
authorization. This configuration varies significantly between authentication
backends. For simplicity, this example will use Vault's built-in userpass
auth method.
A Vault administrator or someone from the security team would create the user in
Vault with a list of associated policies:
```shell-session
$ vault write auth/userpass/users/sethvargo \
password="s3cr3t!" \
policies="dev-readonly,logs"
```
This creates an authentication mapping to the policy such that, when the user
authenticates successfully to Vault, they will be given a token which has the list
of policies attached.
The user wishing to authenticate would run:
```shell-session
$ vault login -method="userpass" username="sethvargo"
Password (will be hidden): ...
```
If the provided information is correct, Vault will generate a token, assign the
list of configured policies to the token, and return that token to the
authenticated user.
## Root protected API endpoints
~> **Note:** Vault treats the HTTP POST and PUT verbs as equivalent, so for each mention
of POST in the table below, PUT may also be used. Vault uses the non-standard LIST HTTP
verb, but also allows list requests to be made using the GET verb along with `?list=true`
as a query parameter, so for each mention of LIST in the table above, GET with `?list=true`
may also be used.
The following paths require a root token or `sudo` capability in the policy:
| Path | HTTP verb | Description |
| ------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ------------------------------------------------------------------------------------------------------------------- |
| [auth/token/accessors](/vault/api-docs/auth/token#list-accessors) | LIST | List token accessors for all current Vault service tokens |
| [auth/token/create](/vault/api-docs/auth/token#create-token) | POST | Create a periodic or an orphan token (`period` or `no_parent`) option |
| [auth/token/revoke-orphan](/vault/api-docs/auth/token#revoke-token-and-orphan-children) | POST | Revoke a token but not its child tokens, which will be orphaned |
| [pki/root](/vault/api-docs/secret/pki#delete-all-issuers-and-keys) | DELETE | Delete the current CA key ([pki secrets engine](/vault/docs/secrets/pki)) |
| [pki/root/sign-self-issued](/vault/api-docs/secret/pki#sign-self-issued) | POST | Use the configured CA certificate to sign a self-issued certificate ([pki secrets engine](/vault/docs/secrets/pki)) |
| [sys/audit](/vault/api-docs/system/audit) | GET | List enabled audit devices |
| [sys/audit/:path](/vault/api-docs/system/audit) | POST, DELETE | Enable or remove an audit device |
| [sys/auth/:path](/vault/api-docs/system/auth) | GET, POST, DELETE | Manage the auth methods (enable, read, and delete) |
| [sys/auth/:path/tune](/vault/api-docs/system/auth#tune-auth-method) | GET, POST | Manage the auth methods (enable, read, delete, and tune) |
| [sys/config/auditing/request-headers](/vault/api-docs/system/config-auditing) | GET | List the request headers that are configured to be audited |
| [sys/config/auditing/request-headers/:name](/vault/api-docs/system/config-auditing) | GET, POST, DELETE | Manage the auditing headers (create, update, read and delete) |
| [sys/config/cors](/vault/api-docs/system/config-cors) | GET, POST, DELETE | Configure CORS setting |
| [sys/config/ui/headers](/vault/api-docs/system/config-ui) | GET, LIST | Configure the UI settings |
| [sys/config/ui/headers/:name](/vault/api-docs/system/config-ui#name) | POST, DELETE | Configure custom HTTP headers to be served with the UI |
| [sys/internal/inspect/router/:tag](/vault/api-docs/system/inspect/router) | GET | Inspect the internal components of Vault's router. `tag` must be one of root, uuid, accessor, or storage |
| [sys/leases/lookup/:prefix](/vault/api-docs/system/leases#list-leases) | LIST | List lease IDs |
| [sys/leases/revoke-force/:prefix](/vault/api-docs/system/leases#revoke-force) | POST | Revoke all secrets or tokens ignoring backend errors |
| [sys/leases/revoke-prefix/:prefix](/vault/api-docs/system/leases#revoke-prefix) | POST | Revoke all secrets generated under a given prefix |
| [sys/plugins/catalog/:type/:name](/vault/api-docs/system/plugins-catalog#register-plugin) | GET, POST, DELETE | Register a new plugin, or read/remove an existing plugin |
| [sys/raw/:path](/vault/api-docs/system/raw)                                                                                                             | GET, POST, DELETE | Used to access the raw underlying store in Vault                                                                      |
| [sys/raw/:prefix](/vault/api-docs/system/raw#list-raw)                                                                                                  | GET, LIST         | Returns a list of keys for a given path prefix                                                                        |
| [sys/remount](/vault/api-docs/system/remount) | POST | Moves an already-mounted backend to a new mount point |
| [sys/replication/reindex](/vault/api-docs/system/replication#reindex-replication) | POST | Reindex the local data storage |
| [sys/replication/performance/primary/secondary-token](/vault/api-docs/system/replication/replication-performance#generate-performance-secondary-token) | POST | Generate a performance secondary activation token |
| [sys/replication/dr/primary/secondary-token](/vault/api-docs/system/replication/replication-dr#generate-dr-secondary-token) | POST | Generate a DR secondary activation token |
| [sys/rotate](/vault/api-docs/system/rotate) | POST | Trigger a rotation of the backend encryption key |
| [sys/seal](/vault/api-docs/system/seal) | POST | Seals the Vault |
| [sys/step-down](/vault/api-docs/system/step-down) | POST | Forces a node to give up active status |
| [sys/storage/raft/snapshot-auto/config](/vault/api-docs/system/storage/raftautosnapshots#list-automated-snapshots-configs) | LIST | Lists named configurations |
| [sys/storage/raft/snapshot-auto/config/:name](/vault/api-docs/system/storage/raftautosnapshots) | GET, POST, DELETE | Creates or updates a named configuration |
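For example, to let a non-root token seal Vault - one of the root-protected
endpoints above - a policy must pair the verb-derived capability with `sudo`
(a minimal sketch):

```hcl
# sys/seal is a POST endpoint, so "update" is required alongside "sudo".
path "sys/seal" {
  capabilities = ["update", "sudo"]
}
```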
### Tokens
Tokens have two sets of policies: identity policies, which are computed
based on the entity and its groups, and token policies, which are either defined
based on the login method or, in the case of explicit token creation via the API,
are an input to the token creation. What follows concerns token policies
exclusively: a token's identity policies cannot be controlled except by modifying
the underlying entities, groups, and group memberships.
Tokens are associated with their policies at creation time. For example:
```shell-session
$ vault token create -policy=dev-readonly -policy=logs
```
Normally the only policies that may be specified are those which are present
in the current token's (i.e. the new token's parent's) token policies.
However, root users can assign any policies.
There is no way to modify the policies associated with a token once the token
has been issued. The token must be revoked and a new one acquired to receive a
new set of policies.
However, the _contents_ of policies are parsed in real-time whenever the token is used.
As a result, if a policy is modified, the modified rules will be in force the
next time a token with that policy attached is used to make a call to Vault.
## UI policy requirements
@include 'ui/policy-requirements.mdx'
## Tutorial
Refer to the following tutorials for further learning:
- [Vault Policies](/vault/tutorials/policies/policies)
- [ACL Policy Path Templating](/vault/tutorials/policies/policy-templating)
---
layout: docs
page_title: Filtering
description: >-
An introduction to the filtering syntax used in Vault.
---
# Filter expressions in Vault
@include 'alerts/enterprise-only.mdx'
Filter expressions use matching operators and selector values to parse
out important or relevant information. In some situations, you can use filter
expressions to control how Vault processes results.
## Filter expression syntax
Basic filter expressions are always written in plain text with a
**matching operator**, a **selector**, and a **selector value**.
- the **matching operator** tells Vault how to compare the selector and selector
value.
- the **selector** is a [JSON pointer](https://tools.ietf.org/html/rfc6901) that
indicates which field or parameter in a JSON object to consider.
- the **selector value** is a JSON pointer, number, or string that defines a
pattern Vault can filter against.
For example, in the filter expression:
```text
product/name == "Vault"
```
- Equality (`==`) is the matching operator.
- The JSON pointer `product/name` is the selector.
- The string "Vault" is the selector value.
Complex filter expressions also allow Boolean logic and parentheses. For example:
```text
(product/name == "Vault") and (timestamp < "2024-02-01")
```
When parsing filter expressions, Vault ignores whitespace unless the whitespace
is part of a literal string.
For example, the filter expressions
`product/name=="Vault"` and `product/name == "Vault"` generate the same results,
while `product/name == " Vault "` and `product/name == "Vault"` generate
different results.
<Note title="Selectors are not universal">
Filtering-enabled endpoints can support different selectors. Make sure to
consult the API documentation for a given endpoint when constructing your
filter expressions.
</Note>
<Tabs>
<Tab heading="Matching operators">
```text
// Equality & Inequality checks
<Selector> == "<Value>"
<Selector> != "<Value>"
// Emptiness checks
<Selector> is empty
<Selector> is not empty
// Contains checks or Substring Matching
"<Value>" in <Selector>
"<Value>" not in <Selector>
<Selector> contains "<Value>"
<Selector> not contains "<Value>"
// Regular Expression Matching
<Selector> matches "<Value>"
<Selector> not matches "<Value>"
```
</Tab>
<Tab heading="Selectors">
Selectors must be valid JSON pointers enclosed in quotes with a leading slash (`/`).
JSON pointers use forward slashes to define paths through a JSON block. For
example, to target the product name in each of the following records:

```json
[
  {
    "product": {
      "name": "Vault",
      "version": "1.16.0"
    }
  },
  {
    "product": {
      "name": "Boundary",
      "version": "0.15.0"
    }
  }
]
```
The selector would be `/product/name`.
</Tab>
<Tab heading="Selector values">
Selector values can be any valid selector, integer, floating point number, or
string. Numbers and strings should be quoted in double quotes or backticks.
Strings quoted in backticks are treated as literal values and escape sequences
like `\n` are not expanded.
| Value | Type | Expanded value |
|-------------------|---------|-------------------|
| "Vault\tBoundary" | string | "Vault Boundary" |
| `Vault\tBoundary` | string | "Vault\tBoundary" |
| "10" | integer | "10" |
| `10` | integer | "10" |
| "0.75" | float | "0.75" |
</Tab>
</Tabs>
## Complex expressions
Complex expressions combine basic expressions with logical operators, grouping, and matching expressions.
```text
// Logical Or - evaluates to true if either sub-expression does
<Expression 1> or <Expression 2>
// Logical And - evaluates to true if both sub-expressions do
<Expression 1> and <Expression 2>
// Logical Not - evaluates to true if the sub-expression does not
not <Expression 1>
// Grouping - Overrides normal precedence rules
( <Expression 1> )
// Inspects data to check for a match
<Matching Expression 1>
```
Vault uses standard operator precedence when resolving complex
expressions. For example, the expression
`<Expression 1> and not <Expression 2> or <Expression 3>` resolves
the same as
`( <Expression 1> and (not <Expression 2> )) or <Expression 3>`.
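For instance, the following complex expression (a hypothetical example; the
`request/operation` and `request/mount_type` selectors depend on which
endpoint you are filtering) matches every non-read request against a KV mount:

```text
(request/mount_type == "kv") and not (request/operation == "read")
```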
## Performance
Filters consume a portion of CPU time on the Vault node where they run.
<Note title="Regular expressions">
Using multiple or complex expressions, including regular expressions (regex),
will have a larger impact on performance than fewer, simpler filters.
</Note>
Always test your filters in pre-production environments to ensure correctness.
Ideally you should [codify your management of Vault](/vault/tutorials/operations/codify-mgmt-vault-terraform)
using tools such as [Terraform](https://www.terraform.io/), to prevent accidentally enabling an audit device
in a production environment with untested/incorrect settings.
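For example, Vault Enterprise audit devices accept a `filter` option when they
are enabled. A sketch of exercising a filter against a file audit device in a
test environment (the path and filter values are illustrative) might look
like:

```shell-session
$ vault audit enable file \
    file_path=/var/log/vault-audit.log \
    filter='mount_type == "kv"'
```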
Finally, always ensure you profile production-like workloads within your pre-production
environments in order to accurately assess the performance of Vault.
---
layout: docs
page_title: Production hardening
description: >-
Harden your Vault deployments for production operations.
---
# Production hardening
You can use the best practices in this document to harden Vault when planning
your production deployment. These recommendations follow the
[Vault security model](/vault/docs/internals/security), and focus on defense
in depth.
You should follow the **baseline recommendations** if at all possible for any
production Vault deployment. The **extended recommendations** detail extra
layers of security which may require more administrative overhead, and might
not be suitable for every deployment.
## Baseline recommendations
- **Do not run as root**. Use a dedicated, unprivileged service account to run
Vault, rather than running as the root or Administrator account. Vault is
designed to run as an unprivileged user, and doing so adds significant
defense against various privilege-escalation attacks.
- **Allow minimal write privileges**. The unprivileged Vault service account
should not have access to overwrite its executable binary or any Vault
configuration files. Limit what is writable by the Vault user to just
directories and files for local Vault storage (for example, Integrated
Storage) or file audit device logs.
- **Use end-to-end TLS**. You should always use Vault with TLS in production.
If you use intermediate load balancers or reverse proxies to front Vault,
you should enable TLS for all network connections between every part of the
system (including external storage) to ensure encryption of all traffic in
transit to and from Vault. When possible, you should set the HTTP Strict
Transport Security (HSTS) header using Vault's [custom response headers](/vault/docs/configuration/listener/tcp#configuring-custom-http-response-headers) feature.
- **Disable swap**. Vault encrypts data in transit and at rest, however it must
still have sensitive data in memory to function. Risk of exposure should be
minimized by disabling swap to prevent the operating system from paging
sensitive data to disk. Disabling swap is even more critical when your
Vault deployment uses Integrated Storage.
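
  For example, on a typical Linux host you might disable swap with the
  following sketch (persist the change by removing swap entries from
  `/etc/fstab`):

  ```shell-session
  $ sudo swapoff -a
  ```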
- **Disable core dumps**. A user or administrator that can force a core dump
and has access to the resulting file can potentially access Vault encryption
keys. Preventing core dumps is a platform-specific process; on Linux setting
the resource limit `RLIMIT_CORE` to `0` disables core dumps. In the systemd
service unit file, setting `LimitCORE=0` will enforce this setting for the
Vault service.
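
  For example, a minimal systemd drop-in (a sketch; the path assumes the unit
  is named `vault.service`) that enforces this:

  ```plaintext
  # /etc/systemd/system/vault.service.d/disable-core-dumps.conf
  [Service]
  LimitCORE=0
  ```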
- **Use single tenancy**. Vault should be the sole user process running on a
machine. This reduces the risk that another process running on the same
machine gets compromised and gains the ability to interact with the Vault
  process. Similarly, you should prefer running Vault on bare metal instead
  of a virtual machine, and in a virtual machine instead of a containerized
  environment.
- **Firewall traffic**. Use a local firewall or network security features of
your cloud provider to restrict incoming and outgoing traffic to Vault and
essential system services like NTP. This includes restricting incoming
traffic to permitted sub-networks and outgoing traffic to services Vault
needs to connect to, such as databases.
- **Avoid root tokens**. When you initialize Vault, it emits an initial
root token. You should use this token just to perform initial setup,
such as enabling auth methods so that users can authenticate. You should
treat Vault [configuration as
code](https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration/),
and use version control to manage policies. Once you complete initial Vault
setup, you should revoke the initial root token to reduce risk of exposure. Root tokens can be
[generated when needed](/vault/docs/commands/operator/generate-root), and should be
revoked when no longer needed.
- **Configure user lockout**. Vault provides a [user lockout](/vault/docs/concepts/user-lockout) function
for the [approle](/vault/docs/auth/approle), [ldap](/vault/docs/auth/ldap) and [userpass](/vault/docs/auth/userpass)
  auth methods. **Vault enables user lockout by default**. Verify that the lockout threshold and lockout duration match your organization's security policies.
- **Enable audit device logs**. Vault supports several [audit
devices](/vault/docs/audit). When you enable audit device logs, you gain
a detailed history of all operations performed by Vault, and a forensics
trail in the case of misuse or compromise. Audit logs [securely
hash](/vault/docs/audit#sensitive-information)
sensitive data, but you should still restrict access to prevent any
unintended information disclosure.
- **Disable shell command history**. You may want the `vault` command itself to
not appear in history at all.
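
  For example, with bash you might instruct the shell to skip recording
  `vault` invocations (a sketch; adjust for your shell):

  ```shell-session
  $ export HISTIGNORE="&:vault*"
  ```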
- **Keep a frequent upgrade cadence**. Vault is actively developed, and you
should upgrade Vault often to incorporate security fixes and any changes in
default settings such as key lengths or cipher suites. Subscribe to the
[HashiCorp Announcement mailing list](https://groups.google.com/g/hashicorp-announce)
to receive announcements of new releases and visit the [Vault
CHANGELOG](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md) for
details on the changes made in each release.
- **Synchronize clocks**. Use NTP or whatever mechanism is appropriate for your
environment to ensure that all the Vault nodes agree about what time it is.
Vault uses the clock for things like enforcing TTLs and setting dates in PKI
certificates, and if the nodes have significant clock skew, a failover can wreak havoc.
- **Restrict storage access**. Vault encrypts all data at rest, regardless of
which storage type it uses. Although Vault encrypts the data, an [attacker
with arbitrary
control](/vault/docs/internals/security) can cause
data corruption or loss by modifying or deleting keys. You should restrict
storage access outside of Vault to avoid unauthorized access or operations.
- **Do not use clear text credentials**. The Vault configuration [`seal`
stanza](/vault/docs/configuration/seal) configures the seal type to use for
extra data protection such as using HSM or Cloud KMS solutions to encrypt and
decrypt the root key. **DO NOT** store your cloud credentials or HSM pin in
clear text within the `seal` stanza. If you host the Vault server on the same
cloud platform as the KMS service, use the platform-specific identity
solutions. For example:
- [Resource Access Management (RAM) on AliCloud](/vault/docs/configuration/seal/alicloudkms#authentication)
- [Instance Profiles on AWS](/vault/docs/configuration/seal/awskms#authentication)
- [Managed Service Identities (MSI) on Azure](/vault/docs/configuration/seal/azurekeyvault#authentication)
- [Service Account on Google Cloud Platform](/vault/docs/configuration/seal/gcpckms#authentication-permissions)
When using platform-specific identity solutions, you should be mindful of auth
method and secret engine configuration within namespaces. You can share
platform identity across Vault namespaces, as these provider features
generally offer host-based identity solutions.
If that is not applicable, set the credentials as environment variables
(for example, `VAULT_HSM_PIN`).
- **Use the safest algorithms available**. [Vault's TLS listener](/vault/docs/configuration/listener/tcp#tls_cipher_suites)
supports a variety of legacy algorithms for backwards compatibility. While
these algorithms are available, they are not recommended for use when
a stronger alternative is available. If possible, use TLS 1.3 to ensure
that modern encryption algorithms encrypt data in transit and offer
forward secrecy.
- **Follow best practices for plugins**. While HashiCorp-developed plugins
generally default to a safe configuration, you should be mindful of
misconfigured or malicious Vault plugins. These plugin issues can harm the
security posture of your Vault deployment.
- **Be aware of non-deterministic configuration file merging**. Vault's
configuration file merging is non-deterministic, and inconsistencies in
settings between files can lead to inconsistencies in Vault settings.
  Ensure settings are consistent across all configuration files, including any files merged together via the `-config` flag.
- **Use correct filesystem permissions**. Always ensure appropriate permissions
  are applied to files before starting Vault. This is even more critical for files that contain sensitive information.
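
  For example (a sketch that assumes a configuration file at the hypothetical
  path `/etc/vault.d/vault.hcl`, owned by a dedicated `vault` user):

  ```shell-session
  $ chown vault:vault /etc/vault.d/vault.hcl
  $ chmod 640 /etc/vault.d/vault.hcl
  ```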
- **Use standard input for vault secrets**. [Vault login](/vault/docs/commands/login)
and [Vault unseal](/vault/api-docs/system/unseal#key) allow operators to
give secret values from either standard input or with command-line arguments.
  Command-line arguments can be persisted in shell history and are readable by other unprivileged users on the same host.
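
  For example, omitting the password from a `vault login` invocation causes
  Vault to prompt for it on standard input instead:

  ```shell-session
  $ vault login -method=userpass username=operator
  Password (will be hidden):
  ```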
- **Develop an off-boarding process**. Removing accounts in Vault or associated
identity providers may not immediately revoke [token-based access](/vault/docs/concepts/tokens#user-management-considerations).
Depending on how you manage access to Vault, operators should consider:
- Removing the entity from groups granting access to resources.
- [Revoking](/vault/docs/concepts/lease#prefix-based-revocation) the active leases for a given user account.
- Deleting the canonical entity of the user after removing accounts in Vault or associated identity providers.
Deleting the canonical entity alone is insufficient as one is automatically created on successful login if it does not exist.
- [Disabling](/vault/docs/commands/auth/disable) auth methods instead of deleting them, which revokes all
tokens generated by this auth method.
- **Use short TTLs**. When possible, credentials issued from Vault (for example,
  tokens and X.509 certificates) should be short-lived, so as to guard against their potential compromise and reduce the need for revocation.
## Extended recommendations
- **Disable SSH / remote desktop**. When running a Vault as a single tenant
application, users should never access the machine directly. Instead, they
should access Vault through its API over the network. Use a centralized
logging and telemetry solution for debugging. Be sure to restrict access to
  logs on a need-to-know basis.
- **Use systemd security features**. Systemd provides a number of features
that you can use to lock down access to the filesystem and to
administrative capabilities. The service unit file provided with the
official Vault Linux packages sets a number of these by default, including:
```plaintext
ProtectSystem=full
PrivateTmp=yes
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
AmbientCapabilities=CAP_IPC_LOCK
ProtectHome=read-only
PrivateDevices=yes
NoNewPrivileges=yes
```
See the [systemd.exec manual page](https://www.freedesktop.org/software/systemd/man/systemd.exec.html) for more details.
- **Perform immutable upgrades**. Vault relies on external storage for
persistence, and this decoupling allows the servers running Vault to be
immutably managed. When you upgrade to a new version, you can bring new
servers with the upgraded version of Vault online. You can attach the new
servers to the same shared storage and unseal them. Then you can destroy the
older version servers. This reduces the need for remote access and upgrade orchestration which may introduce security gaps.
- **Configure SELinux / AppArmor**. Using mechanisms like
[SELinux](https://github.com/hashicorp/vault-selinux-policies)
and AppArmor can help you gain layers of security when using Vault.
While Vault can run on several popular operating systems, Linux is
recommended due to the various security primitives mentioned here.
- **Adjust user limits**. It is possible that your Linux distribution enforces
  strict process user limits (`ulimits`). Consider reviewing `ulimits` for the
  maximum number of open files, connections, and so on before going into
  production. You might need to increase the default values to avoid errors
  about too many open files.
- **Be aware of special container considerations**. To use memory locking
(mlock) inside a Vault container, you need to use the `overlayfs2` or another
  supporting driver.
---
layout: docs
page_title: Seal/Unseal
description: >-
A Vault must be unsealed before it can access its data. Likewise, it can be
sealed to lock it down.
---
# Seal/Unseal
When a Vault server is started, it starts in a _sealed_ state. In this
state, Vault is configured to know where and how to access the physical
storage, but doesn't know how to decrypt any of it.
_Unsealing_ is the process of obtaining the plaintext root key necessary to
read the decryption key to decrypt the data, allowing access to the Vault.
Prior to unsealing, almost no operations are possible with Vault. For
example, authentication, managing the mount tables, etc. are not possible.
The only possible operations are to unseal the Vault and check the status
of the seal.
## Why?
The data stored by Vault is encrypted. Vault needs the _encryption key_ in order
to decrypt the data. The encryption key is also stored with the data
(in the _keyring_), but encrypted with another encryption key known as the _root key_.
Therefore, to decrypt the data, Vault must decrypt the encryption key
which requires the root key. Unsealing is the process of getting access to
this root key. The root key is stored alongside all other Vault data,
but is encrypted by yet another mechanism: the unseal key.
To recap: most Vault data is encrypted using the encryption key in the keyring;
the keyring is encrypted by the root key; and the root key is encrypted by
the unseal key.
## Shamir seals
The default Vault config uses a Shamir seal. Instead of distributing the unseal
key as a single key to an operator, Vault uses an algorithm known as
[Shamir's Secret Sharing](https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing)
to split the key into shares. A certain threshold of shares is required to
reconstruct the unseal key, which is then used to decrypt the root key.
This is the _unseal_ process: the shares are added one at a time (in any
order) until enough shares are present to reconstruct the key and
decrypt the root key.
## Unsealing
The unseal process is done by running `vault operator unseal` or via the API.
This process is stateful: each key can be entered via multiple mechanisms from
multiple client machines and it will work. This allows each share of the root
key to be on a distinct client machine for better security.
Note that when using the Shamir seal with multiple nodes, each node must be
unsealed with the required threshold of shares. Partial unsealing of each node
is not distributed across the cluster.
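
For illustration, each invocation of the unseal command reports progress until
the threshold is met (output abbreviated; the values shown are illustrative):

```shell-session
$ vault operator unseal
Unseal Key (will be hidden):
Key                Value
---                -----
Seal Type          shamir
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    1/3
```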
Once a Vault node is unsealed, it remains unsealed until one of these things happens:
1. It is resealed via the API (see below).
2. The server is restarted.
3. Vault's storage layer encounters an unrecoverable error.
-> **Note:** Unsealing makes the process of automating a Vault install
difficult. Automated tools can easily install, configure, and start Vault,
but unsealing it using Shamir is a very manual process. For most users
Auto Unseal will provide a better experience.
## Sealing
There is also an API to seal the Vault. This will throw away the root
key in memory and require another unseal process to restore it. Sealing
only requires a single operator with root privileges.
This way, if there is a detected intrusion, the Vault data can be locked
quickly to try to minimize damages. It can't be accessed again without
access to the root key shares.
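
Sealing requires only a single command (output may vary by version):

```shell-session
$ vault operator seal
Success! Vault is sealed.
```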
## Auto unseal
Auto unseal was developed to aid in reducing the operational complexity of
keeping the unseal key secure. This feature delegates the responsibility of
securing the unseal key from users to a trusted device or service. At startup
Vault will connect to the device or service implementing the seal and ask it
to decrypt the root key Vault read from storage.
There are certain operations in Vault besides unsealing that
require a quorum of users to perform, e.g. generating a root token. When
using a Shamir seal the unseal keys must be provided to authorize these
operations. When using Auto Unseal these operations require _recovery
keys_ instead.
Just as the initialization process with a Shamir seal yields unseal keys,
initializing with an Auto Unseal yields recovery keys.
It is still possible to seal a Vault node using the API. In this case Vault
will remain sealed until restarted, or the unseal API is used, which with Auto
Unseal requires the recovery key fragments instead of the unseal key fragments
that would be provided with Shamir. The process remains the same.
For a list of examples and supported providers, please see the
[seal documentation](/vault/docs/configuration/seal).
When DR replication is enabled in Vault Enterprise, [Performance Standby](/vault/docs/enterprise/performance-standby) nodes on the DR cluster will seal themselves, so they must be restarted to be unsealed.
<Warning title="Recovery keys cannot decrypt the root key">
Recovery keys cannot decrypt the root key and thus are not sufficient to unseal
Vault if the auto unseal mechanism isn't working. They are purely an authorization mechanism.
Using auto unseal creates a strict Vault lifecycle dependency on the underlying seal mechanism.
This means that if the seal mechanism (such as the Cloud KMS key) becomes unavailable,
or deleted before the seal is migrated, then there is no ability to recover
access to the Vault cluster until the mechanism is available again. **If the seal
mechanism or its keys are permanently deleted, then the Vault cluster cannot be recovered, even
from backups.**
To mitigate this risk, we recommend careful controls around management of the seal
mechanism, for example using
[AWS Service Control Policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html)
or similar.
With Vault Enterprise secondary clusters (disaster or performance) can have a
seal configured independently of the primary, and when properly configured guards
against *some* of this risk. Unreplicated items such as local mounts could still
be lost.
</Warning>
## Recovery key
When Vault is initialized while using an HSM or KMS, rather than unseal keys
being returned to the operator, recovery keys are returned. These are generated
from an internal recovery key that is split via Shamir's Secret Sharing, similar
to Vault's treatment of unseal keys when running without an HSM or KMS.
Details about initialization and rekeying follow. When performing an operation
that uses recovery keys, such as `generate-root`, selection of the recovery
keys for this purpose, rather than the barrier unseal keys, is automatic.
### Initialization
When initializing, the split is performed according to the following CLI flags
and their API equivalents in the [/sys/init](/vault/api-docs/system/init) endpoint:
- `recovery-shares`: The number of shares into which to split the recovery
key. This value is equivalent to the `recovery_shares` value in the API
endpoint.
- `recovery-threshold`: The threshold of shares required to reconstruct the
recovery key. This value is equivalent to the `recovery_threshold` value in
the API endpoint.
- `recovery-pgp-keys`: The PGP keys to use to encrypt the returned recovery
key shares. This value is equivalent to the `recovery_pgp_keys` value in the
API endpoint, although as with `pgp_keys` the object in the API endpoint is
an array, not a string.
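
For example, initializing a new Vault with five recovery key shares and a
threshold of three might look like this (a sketch):

```shell-session
$ vault operator init \
    -recovery-shares=5 \
    -recovery-threshold=3
```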
Additionally, Vault will refuse to initialize if it cannot find an existing
key and the seal has not been configured to generate one. See
[Configuration](/vault/docs/configuration/seal/pkcs11) for more details.
### Rekeying
#### Unseal key
Vault's unseal key can be rekeyed using a normal `vault operator rekey`
operation from the CLI or the matching API calls. The rekey operation is
authorized by meeting the threshold of recovery keys. After rekeying, the new
barrier key is wrapped by the HSM or KMS and stored like the previous key; it is not
returned to the users that submitted their recovery keys.
<EnterpriseAlert product="vault">
Seal wrapping requires Vault Enterprise
</EnterpriseAlert>
#### Recovery key
The recovery key can be rekeyed to change the number of shares/threshold or to
target different key holders via different PGP keys. When using the Vault CLI,
this is performed by using the `-target=recovery` flag to `vault operator rekey`.
Via the API, the rekey operation is performed with the same parameters as the
[normal `/sys/rekey`
endpoint](/vault/api-docs/system/rekey); however, the
API prefix for this operation is at `/sys/rekey-recovery-key` rather than
`/sys/rekey`.
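
For example, starting a recovery key rekey from the CLI might look like this
(a sketch; the share and threshold values are illustrative):

```shell-session
$ vault operator rekey \
    -target=recovery \
    -init \
    -key-shares=5 \
    -key-threshold=3
```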
## Seal migration
The seal migration process cannot be performed without downtime, and due to the
technical underpinnings of the seal implementations, the process requires that
you briefly take the whole cluster down. While experiencing some downtime may
be unavoidable, we believe that switching seals is a rare event and that the
inconvenience of the downtime is an acceptable trade-off.
~> **NOTE**: A backup should be taken before starting seal migration in case
something goes wrong.
~> **NOTE**: Seal migration operation will require both old and new seals to be
available during the migration. For example, migration from auto unseal to Shamir
seal will require that the service backing the auto unseal is accessible during
the migration.
~> **NOTE**: Seal migration from auto unseal to auto unseal of the same type is
supported since Vault 1.6.0. However, there is a current limitation that
prevents migrating from AWSKMS to AWSKMS; all other seal migrations of the same
type are supported. Seal migration from one auto unseal type (AWS KMS) to a
different auto unseal type (HSM, Azure KMS, etc.) is supported on older
versions as well.
### Migration post Vault 1.16.0 via Seal HA for Auto Seals (Enterprise)
With Seal HA, migration between auto-unseal types (not including any Shamir
seals) can be done fully online using Seal High Availability (Seal HA) without
any downtime.
1. Edit the Vault configuration, and add the new, target seal configuration.
1. Send the Vault process the SIGHUP signal, triggering a configuration reload.
1. Monitor the [`sys/sealwrap/rewrap`](/vault/api-docs/system/sealwrap-rewrap) endpoints,
to see that rewrap is running, and/or [`sys/seal-backend-status`](/vault/api-docs/system/seal-backend-status),
endpoints, waiting for `fully_wrapped` to be true, indicating all seal wrapped values are now
wrapped by the new seal. The logs also contain information about the rewrap progress.
1. Edit the Vault configuration, removing the old seal configuration.
1. Send the Vault process the SIGHUP signal, again allowing re-wrapping to complete.
### Migration post Vault 1.5.1
These steps are common for seal migrations between any supported kinds and for
any storage backend.
1. Take a standby node down and update the [seal
configuration](/vault/docs/configuration/seal).
- If the migration is from Shamir seal to Auto seal, add the desired new Auto
seal block to the configuration.
- If the migration is from Auto seal to Shamir seal, add `disabled = "true"`
to the old seal block.
- If the migration is from Auto seal to another Auto seal, add `disabled =
"true"` to the old seal block and add the desired new Auto seal block.
Now, bring the standby node back up and run the unseal command with each key,
supplying the `-migrate` flag.
- Supply Shamir unseal keys if the old seal was Shamir, which will be migrated
as the recovery keys for the Auto seal.
- Supply recovery keys if the old seal is one of Auto seals, which will be
migrated as the recovery keys of the new Auto seal, or as Shamir unseal
keys if the new seal is Shamir.
1. Perform step 1 for all the standby nodes, one at a time. It is necessary to
bring back the downed standby node before moving on to the other standby nodes,
specifically when Integrated Storage is in use, since it helps to retain
quorum.
1. [Step down](/vault/docs/commands/operator/step-down) the
active node. One of the standby nodes will become the new active node.
When using Integrated Storage, ensure that quorum is reached and a leader is
elected.
1. The new active node will perform the migration. Monitor the server log in
the active node to witness the completion of the seal migration process.
When using Integrated Storage, wait a little while for the migration
information to replicate to all the nodes. In Vault Enterprise, switching an Auto seal
implies that the seal wrapped storage entries get re-wrapped. Monitor the log
and wait until this process is complete (look for `seal re-wrap completed`).
<Warning heading="Seal configuration changes will invoke rewrap">
Any change to the `seal` stanza in your Vault configuration invokes seal-rewrap,
even "migrations" from the same auto-unseal type like `pkcs11` to `pkcs11`.
</Warning>
1. Seal migration is now completed. Take down the old active node, update its
configuration to use the new seal blocks (completely unaware of the old seal type),
and bring it back up. It will be auto-unsealed if the new seal is one of the
auto seals, or will require unseal keys if the new seal is Shamir.
1. At this point, configuration files of all the nodes can be updated to only have the
new seal information. Standby nodes can be restarted right away and the active
node can be restarted upon a leadership change.
### Migration pre 1.5.1
#### Migration from shamir to auto unseal
To migrate from Shamir keys to Auto Unseal, take your server cluster offline and
update the [seal configuration](/vault/docs/configuration/seal) with the appropriate
seal configuration. Bring your server back up and leave the rest of the nodes
offline if using multi-server mode, then run the unseal process with the
`-migrate` flag and bring the rest of the cluster online.
All unseal commands must specify the `-migrate` flag. Once the required
threshold of unseal keys are entered, unseal keys will be migrated to recovery
keys.
```shell-session
$ vault operator unseal -migrate
```
#### Migration from auto unseal to shamir
To migrate from auto unseal to Shamir keys, take your server cluster offline
and update the [seal configuration](/vault/docs/configuration/seal) and add `disabled
= "true"` to the seal block. This allows the migration to use this information
to decrypt the key but will not unseal Vault. When you bring your server back
up, run the unseal process with the `-migrate` flag and use the Recovery Keys
to perform the migration. All unseal commands must specify the `-migrate` flag.
Once the required threshold of recovery keys are entered, the recovery keys
will be migrated to be used as unseal keys.
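
As a sketch, assuming the old seal was AWS KMS, the updated `seal` stanza
might look like the following during the migration (the region and key ID are
placeholders):

```hcl
seal "awskms" {
  disabled   = "true"
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal-key"
}
```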
#### Migration from auto unseal to auto unseal
~> **NOTE**: Migration between same Auto Unseal types is supported in Vault
1.6.0 and higher. For these pre-1.5.1 steps, it is only possible to migrate from
one type of auto unseal to a different type (i.e., Transit -> AWSKMS).
To migrate from auto unseal to a different auto unseal configuration, take your
server cluster offline and update the existing [seal
configuration](/vault/docs/configuration/seal) and add `disabled = "true"` to the seal
block. Then add another seal block to describe the new seal.
When you bring your server back up, run the unseal process with the `-migrate`
flag and use the Recovery Keys to perform the migration. All unseal commands
must specify the `-migrate` flag. Once the required threshold of recovery keys
are entered, the recovery keys will be kept and used as recovery keys in the new
seal.
#### Migration with integrated storage
Integrated Storage uses the Raft protocol underneath, which requires a quorum of
servers to be online before the cluster is functional. Therefore, bringing the
cluster back up one node at a time with the seal configuration updated, will not
work in this case. Follow the same steps for each kind of migration described
above with the exception that after the cluster is taken offline, update the
seal configurations of all the nodes appropriately and bring them all back up.
When the quorum of nodes is back up, Raft will elect a leader, and the leader
node will perform the migration. The migrated information will be replicated to
all other cluster peers, and when a peer eventually becomes the leader,
migration will not happen again on that node.
## Seal high availability <EnterpriseAlert inline="true" />
Seal high availability (Seal HA) allows the configuration of more than one auto
seal mechanism such that Vault can tolerate the temporary loss of a seal service
or device for a time. With Seal HA configured with at least two and no more than
three auto seals, Vault can also start up and unseal if one of the
configured seals is still available (though Vault will remain in a degraded mode in
this case). While seals are unavailable, seal wrapping and entropy augmentation can
still occur using the remaining seals, and values produced while a seal is down will
be re-wrapped with all the seals when all seals become healthy again.
An operator should choose two seals that are unlikely to become unavailable at the
same time. For example, they may choose KMS keys in two cloud regions, from
two different providers; or a mix of HSM, KMS, or Transit seals.
When an operator configures an additional seal or removes a seal (one at a time)
and restarts Vault, Vault will automatically detect that it needs to re-wrap
CSPs and seal wrapped values, and will start the process. Seal re-wrapping can
be monitored via the logs or via the `sys/seal-status` endpoint. While a
re-wrap is in progress (or could not complete successfully), changes to the
seal configuration are not allowed.
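For example, the seal status can be queried without authentication while a
re-wrap is in progress; `$VAULT_ADDR` is assumed to point at your cluster:

```shell-session
$ curl $VAULT_ADDR/v1/sys/seal-status
```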
In addition to high availability, Seal HA can be used to migrate between two
auto seals in a [simplified manner](#migration-post-vault-1-16-0-via-seal-ha-for-auto-seals-enterprise).
Note that Shamir seals are not auto seals and cannot be included in a Seal
HA setup. This is because auto seals support seal wrap while Shamir seals
do not, so the loss of the auto seal does not necessarily leave Vault in a
fully available state.
### Use and Configuration
Refer to the [configuration](/vault/docs/configuration/seal/seal-ha) section
for details on configuring Seal HA.
### Seal Re-Wrapping
Whenever seal configuration changes, Vault must re-wrap all CSPs and seal
wrapped values, to ensure each value has an entry encrypted by all configured
seals. Vault detects these configuration changes automatically, and triggers
a re-wrap. Re-wraps can take some time, depending on the number of
seal wrapped values. While re-wrapping is in progress, no configuration changes
to the seals can be made.
Progress of the re-wrap can be monitored using
the [`sys/sealwrap/rewrap`](/vault/api-docs/system/sealwrap-rewrap) endpoint.
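A minimal sketch of polling that endpoint, assuming a token with read access:

```shell-session
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/sys/sealwrap/rewrap
```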
### Limitations and Known Issues
In order to limit complexity and increase safety, there are some limitations
to the use and configuration of Seal HA:
* Vault must be configured for a single seal at the time of initialization.
Extra seals can then be added.
* Seals must be added or removed one at a time.
* Only auto seals can be used in HA configurations. Shamir and auto cannot
be mixed.
* A maximum of three seals can be configured.
* As seal wrapped values must be wrapped by all configured seals, it is possible
that large values may fail to persist as the size of the entry is multiplied by
the number of seals causing it to exceed the storage entry size limit. An example
would be storing a large document in KVv2 with seal wrapping enabled.
* It is not possible to rotate the data encryption key nor the recovery keys
  unless all seals are healthy.
Vault will connect to the device or service implementing the seal and ask it to decrypt the root key Vault read from storage Auto Unseal img vault auto unseal png There are certain operations in Vault besides unsealing that require a quorum of users to perform e g generating a root token When using a Shamir seal the unseal keys must be provided to authorize these operations When using Auto Unseal these operations require recovery keys instead Just as the initialization process with a Shamir seal yields unseal keys initializing with an Auto Unseal yields recovery keys It is still possible to seal a Vault node using the API In this case Vault will remain sealed until restarted or the unseal API is used which with Auto Unseal requires the recovery key fragments instead of the unseal key fragments that would be provided with Shamir The process remains the same For a list of examples and supported providers please see the seal documentation vault docs configuration seal When DR replication is enabled in Vault Enterprise Performance Standby vault docs enterprise performance standby nodes on the DR cluster will seal themselves so they must be restarted to be unsealed Warning title Recovery keys cannot decrypt the root key Recovery keys cannot decrypt the root key and thus are not sufficient to unseal Vault if the auto unseal mechanism isn t working They are purely an authorization mechanism Using auto unseal creates a strict Vault lifecycle dependency on the underlying seal mechanism This means that if the seal mechanism such as the Cloud KMS key becomes unavailable or deleted before the seal is migrated then there is no ability to recover access to the Vault cluster until the mechanism is available again If the seal mechanism or its keys are permanently deleted then the Vault cluster cannot be recovered even from backups To mitigate this risk we recommend careful controls around management of the seal mechanism for example using AWS Service Control Policies https docs aws amazon com organizations latest userguide orgs manage policies scps html or similar With Vault Enterprise secondary clusters disaster or performance can have a seal configured independently of the primary and when properly configured guards against some of this risk Unreplicated items such as local mounts could still be lost Warning Recovery key When Vault is initialized while using an HSM or KMS rather than unseal keys being returned to the operator recovery keys are returned These are generated from an internal recovery key that is split via Shamir s Secret Sharing similar to Vault s treatment of unseal keys when running without an HSM or KMS Details about initialization and rekeying follow When performing an operation that uses recovery keys such as generate root selection of the recovery keys for this purpose rather than the barrier unseal keys is automatic Initialization When initializing the split is performed according to the following CLI flags and their API equivalents in the sys init vault api docs system init endpoint recovery shares The number of shares into which to split the recovery key This value is equivalent to the recovery shares value in the API endpoint recovery threshold The threshold of shares required to reconstruct the recovery key This value is equivalent to the recovery threshold value in the API endpoint recovery pgp keys The PGP keys to use to encrypt the returned recovery key shares This value is equivalent to the recovery pgp keys value in the API endpoint although as with pgp keys the object in 
the API endpoint is an array not a string Additionally Vault will refuse to initialize if the option has not been set to generate a key and no key is found See Configuration vault docs configuration seal pkcs11 for more details Rekeying Unseal key Vault s unseal key can be rekeyed using a normal vault operator rekey operation from the CLI or the matching API calls The rekey operation is authorized by meeting the threshold of recovery keys After rekeying the new barrier key is wrapped by the HSM or KMS and stored like the previous key it is not returned to the users that submitted their recovery keys EnterpriseAlert product vault Seal wrapping requires Vault Enterprise EnterpriseAlert Recovery key The recovery key can be rekeyed to change the number of shares threshold or to target different key holders via different PGP keys When using the Vault CLI this is performed by using the target recovery flag to vault operator rekey Via the API the rekey operation is performed with the same parameters as the normal sys rekey endpoint vault api docs system rekey however the API prefix for this operation is at sys rekey recovery key rather than sys rekey Seal migration The seal migration process cannot be performed without downtime and due to the technical underpinnings of the seal implementations the process requires that you briefly take the whole cluster down While experiencing some downtime may be unavoidable we believe that switching seals is a rare event and that the inconvenience of the downtime is an acceptable trade off NOTE A backup should be taken before starting seal migration in case something goes wrong NOTE Seal migration operation will require both old and new seals to be available during the migration For example migration from auto unseal to Shamir seal will require that the service backing the auto unseal is accessible during the migration NOTE Seal migration from auto unseal to auto unseal of the same type is supported since Vault 1 6 0 However there is a current limitation that prevents migrating from AWSKMS to AWSKMS all other seal migrations of the same type are supported Seal migration from one auto unseal type AWS KMS to different auto unseal type HSM Azure KMS etc is also supported on older versions as well Migration post Vault 1 16 0 via Seal HA for Auto Seals Enterprise With Seal HA migration between auto unseal types not including any Shamir seals can be done fully online using Seal High Availability Seal HA without any downtime 1 Edit the Vault configuration and add the new target seal configuration 1 Send the Vault process the SIGHUP signal triggering a configuration reload 1 Monitor the sys sealwrap rewrap vault api docs system sealwrap rewrap endpoints to see that rewrap is running and or sys seal backend status vault api docs system seal backend status endpoints waiting for fully wrapped to be true indicating all seal wrapped values are now wrapped by the new seal The logs also contain information about the rewrap progress 1 Edit the Vault configuration removing the old seal configuration 1 Send the Vault process the SIGHUP signal again allowing re wrapping to complete Migration post Vault 1 5 1 These steps are common for seal migrations between any supported kinds and for any storage backend 1 Take a standby node down and update the seal configuration vault docs configuration seal If the migration is from Shamir seal to Auto seal add the desired new Auto seal block to the configuration If the migration is from Auto seal to Shamir seal add disabled true to the old 
seal block If the migration is from Auto seal to another Auto seal add disabled true to the old seal block and add the desired new Auto seal block Now bring the standby node back up and run the unseal command on each key by supplying the migrate flag Supply Shamir unseal keys if the old seal was Shamir which will be migrated as the recovery keys for the Auto seal Supply recovery keys if the old seal is one of Auto seals which will be migrated as the recovery keys of the new Auto seal or as Shamir unseal keys if the new seal is Shamir 1 Perform step 1 for all the standby nodes one at a time It is necessary to bring back the downed standby node before moving on to the other standby nodes specifically when Integrated Storage is in use for it helps to retain the quorum 1 Step down vault docs commands operator step down the active node One of the standby nodes will become the new active node When using Integrated Storage ensure that quorum is reached and a leader is elected 1 The new active node will perform the migration Monitor the server log in the active node to witness the completion of the seal migration process Wait for a little while for the migration information to replicate to all the nodes in case of Integrated Storage In enterprise Vault switching an Auto seal implies that the seal wrapped storage entries get re wrapped Monitor the log and wait until this process is complete look for seal re wrap completed Warning heading Seal configuration changes will invoke rewrap Any change to the seal stanza in your Vault configuration invokes seal rewrap even migrations from the same auto unseal type like pkcs11 to pkcs11 Warning 1 Seal migration is now completed Take down the old active node update its configuration to use the new seal blocks completely unaware of the old seal type and bring it back up It will be auto unsealed if the new seal is one of the auto seals or will require unseal keys if the new seal is Shamir 1 At this point configuration files of all the nodes can be updated to only have the new seal information Standby nodes can be restarted right away and the active node can be restarted upon a leadership change Migration pre 1 5 1 Migration from shamir to auto unseal To migrate from Shamir keys to Auto Unseal take your server cluster offline and update the seal configuration vault docs configuration seal with the appropriate seal configuration Bring your server back up and leave the rest of the nodes offline if using multi server mode then run the unseal process with the migrate flag and bring the rest of the cluster online All unseal commands must specify the migrate flag Once the required threshold of unseal keys are entered unseal keys will be migrated to recovery keys vault operator unseal migrate Migration from auto unseal to shamir To migrate from auto unseal to Shamir keys take your server cluster offline and update the seal configuration vault docs configuration seal and add disabled true to the seal block This allows the migration to use this information to decrypt the key but will not unseal Vault When you bring your server back up run the unseal process with the migrate flag and use the Recovery Keys to perform the migration All unseal commands must specify the migrate flag Once the required threshold of recovery keys are entered the recovery keys will be migrated to be used as unseal keys Migration from auto unseal to auto unseal NOTE Migration between same Auto Unseal types is supported in Vault 1 6 0 and higher For these pre 1 5 1 steps it is only possible to 
migrate from one type of auto unseal to a different type ie Transit AWSKMS To migrate from auto unseal to a different auto unseal configuration take your server cluster offline and update the existing seal configuration vault docs configuration seal and add disabled true to the seal block Then add another seal block to describe the new seal When you bring your server back up run the unseal process with the migrate flag and use the Recovery Keys to perform the migration All unseal commands must specify the migrate flag Once the required threshold of recovery keys are entered the recovery keys will be kept and used as recovery keys in the new seal Migration with integrated storage Integrated Storage uses the Raft protocol underneath which requires a quorum of servers to be online before the cluster is functional Therefore bringing the cluster back up one node at a time with the seal configuration updated will not work in this case Follow the same steps for each kind of migration described above with the exception that after the cluster is taken offline update the seal configurations of all the nodes appropriately and bring them all back up When the quorum of nodes are back up Raft will elect a leader and the leader node that will perform the migration The migrated information will be replicated to all other cluster peers and when the peers eventually become the leader migration will not happen again on the peer nodes Seal high availability EnterpriseAlert inline true Seal high availability Seal HA allows the configuration of more than one auto seal mechanism such that Vault can tolerate the temporary loss of a seal service or device for a time With Seal HA configured with at least two and no more than three auto seals Vault can also start up and unseal if one of the configured seals is still available though Vault will remain in a degraded mode in this case While seals are unavailable seal wrapping and entropy augmentation can still occur using the remaining seals and values produced while a seal is down will be re wrapped with all the seals when all seals become healthy again An operator should choose two seals that are unlikely to become unavailable at the same time For example they may choose KMS keys in two cloud regions from two different providers or a mix of HSM KMS or Transit seals When an operator configures an additional seal or removes a seal one at a time and restarts Vault Vault will automatically detect that it needs to re wrap CSPs and seal wrapped values and will start the process Seal re wrapping can be monitored via the logs or via the sys seal status endpoint While a re wrap is in progress or could not complete successfully changes to the seal configuration are not allowed In additional to high availability seal HA can be used to migrate between two auto seals in a simplified manner To migrate in this way In additional to high availability Seal HA can be used to migrate between two auto seals in a simplified manner migration post vault 1 16 0 via seal ha for auto seals enterprise Note that Shamir seals are not auto seals and cannot be included in a Seal HA setup This is because auto seals support seal wrap while Shamir seals do not so the loss of the auto seal does not necessarily leave Vault in a fully available state Use and Configuration Refer to the configuration vault docs configuration seal seal ha section for details on configuring Seal HA Seal Re Wrapping Whenever seal configuration changes Vault must re wrap all CSPs and seal wrapped values to ensure each value 
has an entry encrypted by all configured seals Vault detects these configuration changes automatically and triggers a re wrap Re wraps can take some time depending on the number of seal wrapped values While re wrapping is in progress no configuration changes to the seals can be made Progress of the re wrap can be monitored using the sys sealwrap rewrap vault api docs system sealwrap rewrap endpoint Limitations and Known Issues In order to limit complexity and increase safety there are some limitations to the use and configuration of Seal HA Vault must be configured for a single seal at the time of initialization Extra seals can then be added Seals must be added or removed one at a time Only auto seals can be used in HA configurations Shamir and auto cannot be mixed A maximum of three seals can be configured As seal wrapped values must be wrapped by all configured seals it is possible that large values may fail to persist as the size of the entry is multiplied by the number of seals causing it to exceed the storage entry size limit An example would be storing a large document in KVv2 with seal wrapping enabled It is not possible to rotate the data encryption key nor the recovery keys while unless all seals are healthy |
---
layout: docs
page_title: OIDC Provider
description: >-
Describes how Vault can be an OIDC identity provider.
---
# OIDC provider
This document provides conceptual information about the Vault **OpenID Connect (OIDC) identity
provider** feature. This feature enables client applications that speak the OIDC protocol to
leverage Vault's source of [identity](/vault/docs/concepts/identity) and wide range of [authentication methods](/vault/docs/auth)
when authenticating end-users. For more information about the usage of Vault's OIDC provider,
refer to the [OIDC identity provider](/vault/docs/secrets/identity/oidc-provider) documentation.
## Configuration options
The next few sections of the document provide implementation details for each resource used to configure Vault as an OIDC identity provider.
### OIDC providers
Each Vault namespace will contain a built-in provider resource named `default`. The `default`
provider will allow all client applications within the namespace to use it for OIDC flows.
The `default` provider can be modified but not deleted.
Additionally, a Vault namespace may contain several provider resources. Each configured provider will publish the APIs listed within the [OIDC flow](/vault/docs/concepts/oidc-provider#oidc-flow) section. The APIs will be served via backend path-based routing on Vault's listen [address](/vault/docs/configuration/listener/tcp#address).
A provider has the following configuration parameters:
* **Issuer URL**: used in the `iss` claim of ID tokens
* **Allowed client IDs**: limits which clients can access the provider
* **Scopes supported**: limits what identity information is available as claims
The issuer URL parameter is necessary for the validation of ID tokens by clients. If an issuer URL is not provided explicitly, it will default to a URL with Vault's [api_addr](/vault/docs/configuration#api_addr) as the `scheme://host:port` component and `/v1/:namespace/identity/oidc/provider/:name` as the path component. This means tokens issued by a provider in a specified Vault cluster must be validated within that same cluster. If the issuer URL is provided explicitly, it must point to a Vault instance that is network-reachable by clients for ID token validation.
The allowed client IDs parameter utilizes the list of client IDs that have been generated by Vault as a part of client registration. By default, all clients will be *disallowed*. Providing `*` as the parameter value will allow all clients to use the provider.
The scopes parameter employs a list of references to named scope resources. The values provided are discoverable by the `scopes_supported` key in the OIDC discovery document of the provider. By default, a provider will have the `openid` scope available. See the scopes section below for more details on the `openid` scope.
### Scopes
Providers may reference scope resources via the `scopes_supported` parameter to make specific identity information available as claims.
A scope will have the following configuration parameters:
* **Description**: identity information captured by the scope
* **Template**: maps individual claims to Vault identity information
The template parameter takes advantage of the [JSON-based templating](/vault/api-docs/secret/identity/tokens#template) used by identity tokens for claims mapping. This means the parameter will take a JSON string of arbitrary structure where the values may be replaced with specific identity information. Template parameters that are not present for a Vault identity are omitted from the resulting claims without an error.
Example of a JSON template for a scope:
```
{
    "username": {{identity.entity.name}},
    "contact": {
        "email": {{identity.entity.metadata.email}},
        "phone_number": {{identity.entity.metadata.phone_number}}
    },
    "groups": {{identity.entity.groups.names}}
}
```
The full list of template parameters is included in the following table:
| Name | Description |
| :------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------- |
| `identity.entity.id` | The entity's ID |
| `identity.entity.name` | The entity's name |
| `identity.entity.groups.ids` | The IDs of the groups the entity is a member of |
| `identity.entity.groups.names` | The names of the groups the entity is a member of |
| `identity.entity.metadata` | Metadata associated with the entity |
| `identity.entity.metadata.<metadata key>` | Metadata associated with the entity for the given key |
| `identity.entity.aliases.<mount accessor>.id` | Entity alias ID for the given mount |
| `identity.entity.aliases.<mount accessor>.name` | Entity alias name for the given mount |
| `identity.entity.aliases.<mount accessor>.metadata` | Metadata associated with the alias for the given mount |
| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>` | Metadata associated with the alias for the given mount and metadata key |
| `identity.entity.aliases.<mount accessor>.custom_metadata` | Custom metadata associated with the alias for the given mount |
| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key |
| `time.now` | Current time as integral seconds since the Epoch |
| `time.now.plus.<duration>` | Current time plus a [duration format string](/vault/docs/concepts/duration-format) |
| `time.now.minus.<duration>` | Current time minus a [duration format string](/vault/docs/concepts/duration-format) |
Several named scopes can be made available on an individual provider. Note that the top-level keys in a JSON template may conflict with those in another scope. When scopes are made available on a provider, their templates are checked for top-level conflicts. A warning will be issued to the Vault operator if any conflicts are found. This may result in an error if the scopes are requested in an OIDC Authentication Request.
The `openid` scope is a special-case scope that may not be modified or deleted. The scope exists in Vault and is supported by each provider by default. The scope represents the minimum set of claims required by the OIDC specification for inclusion in ID tokens. As such, templates may not contain top-level keys that overwrite the claims populated by the `openid` scope.
The following defines the claims key and value mapping for the `openid` scope:
* `iss`- configured issuer of the provider
* `sub`- unique entity ID of the Vault user
* `aud`- ID of the client
* `iat`- time of token issue
* `exp`- time of token issue + ID token TTL
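For illustration only, the `openid` claims of a decoded ID token might look like
the following; every value below is hypothetical:

```
{
  "iss": "https://vault.example.com:8200/v1/identity/oidc/provider/default",
  "sub": "f65dfd48-0e12-4e80-a0b2-6a867dd9f1a3",
  "aud": "4Sz2fSjizWXFBzhi4Pr9DWbKxMa9VQvt",
  "iat": 1695830400,
  "exp": 1695834000
}
```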
### Client applications
A client resource represents an application that wants to delegate end-user authentication
to Vault using the OIDC protocol. The information provided by a client resource can be used
to configure an OIDC [relying party](https://openid.net/specs/openid-connect-core-1_0.html#Terminology).
A client has the following configuration parameters:
* **Redirect URIs**: limits the valid redirect URIs in an authentication request
* **Assignments**: determine who can authenticate with the client
* **Key**: used to sign the ID tokens
* **ID token TTL**: specifies the time-to-live for ID tokens
* **Access token TTL**: specifies the time-to-live for access tokens
* **Client type**: determines the client's ability to maintain confidentiality of credentials
The `key` parameter is optional. The key will be used to sign ID tokens for the client.
It cannot be modified after creation. If not supplied, defaults to the built-in
[default key](/vault/docs/concepts/oidc-provider#keys).
A `client_id` is generated and returned after a successful client registration. The
`client_id` uniquely identifies the client. Its value will be a string with 32 random
characters from the base62 character set.
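As a sketch, registering a client might look like the following; the client
name, redirect URI, and TTL values are illustrative:

```shell-session
$ vault write identity/oidc/client/my-webapp \
    redirect_uris="https://my-webapp.example.com/callback" \
    assignments="allow_all" \
    id_token_ttl="30m" \
    access_token_ttl="1h"
```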
~> **Note**: At least one of the redirect URIs of a client must exactly match the `redirect_uri` parameter used in an authentication request initiated by the client.
#### Client types
A client resource has a `client_type` parameter which specifies the OAuth 2.0
[client type](https://datatracker.ietf.org/doc/html/rfc6749#section-2.1) based on
its ability to maintain confidentiality of credentials. The following sections detail
the differences between confidential and public clients in Vault.
##### Confidential
Confidential clients are capable of maintaining the confidentiality of their credentials.
Confidential clients have a `client_secret`. The `client_secret` will have a prefix of
`hvo_secret` followed by 64 random characters in the base62 character set.
Confidential clients may use Proof Key for Code Exchange ([PKCE](https://datatracker.ietf.org/doc/html/rfc7636))
during the authorization code flow.
Confidential clients must authenticate to the token endpoint using the
`client_secret_basic` or `client_secret_post` [client authentication method](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication).
##### Public
Public clients are not capable of maintaining the confidentiality of their credentials.
As such, public clients do not have a `client_secret`.
Public clients must use Proof Key for Code Exchange ([PKCE](https://datatracker.ietf.org/doc/html/rfc7636))
during the authorization code flow.
Public clients use the `none` [client authentication method](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication).
### Assignments
Assignment resources are referenced by clients via the `assignments` parameter. This parameter limits the set of Vault users allowed to authenticate. The assignments of an associated client are validated during the authentication request, ensuring that the Vault identity associated with the request is a member of the assignment's entities or groups.
Each Vault namespace will contain a built-in assignment resource named `allow_all`. The
`allow_all` assignment allows all Vault entities to authenticate through a client. The
`allow_all` assignment cannot be modified or deleted.
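A minimal sketch of creating a custom assignment; the assignment name and IDs
are placeholders:

```shell-session
$ vault write identity/oidc/assignment/my-assignment \
    entity_ids="<entity_id>" \
    group_ids="<group_id>"
```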
### Keys
Key resources are referenced by clients via the `key` parameter. This parameter specifies
the key that will be used to sign ID tokens for the client. See existing
[documentation](/vault/api-docs/secret/identity/tokens#create-a-named-key) for details on keyring
management, supported signing algorithms, rotation periods, and verification TTLs. Currently,
a key referenced by a client cannot be changed.
Each Vault namespace will contain a built-in key resource named `default`. The `default`
key can be modified but not deleted. Clients that don't specify the `key` parameter at
creation time will use the `default` key.
The `default` key will have the following configuration:
- `algorithm` - `RS256`
- `allowed_client_ids` - `*`
- `rotation_period` - `24h`
- `verification_ttl` - `24h`
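For illustration, a custom key with equivalent settings might be created as
follows; the key name `my-key` is a placeholder:

```shell-session
$ vault write identity/oidc/key/my-key \
    algorithm="RS256" \
    allowed_client_ids="*" \
    rotation_period="24h" \
    verification_ttl="24h"
```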
## OIDC flow
~> **Note**: The Vault OIDC Provider feature currently only supports the [authorization code flow](https://openid.net/specs/openid-connect-core-1_0.html#CodeFlowAuth).
The following sections provide implementation details for the OIDC compliant APIs provided by Vault OIDC providers.
Vault OIDC providers enable registered clients to authenticate and obtain identity information (or "claims") for their end-users. They do this by providing the APIs and behavior required to satisfy the OIDC specification for the [authorization code flow](https://openid.net/specs/openid-connect-core-1_0.html#CodeFlowAuth). All clients are treated as first-party. This means that end-users will not be required to provide consent to the provider as detailed in section [3.1.2.4](https://openid.net/specs/openid-connect-core-1_0.html#Consent) of the OIDC specification. The provider will release information to clients as long as the end-user has ACL access to the provider and their identity has been authorized via an assignment.
Vault OIDC providers implement Proof Key for Code Exchange ([PKCE](https://datatracker.ietf.org/doc/html/rfc7636))
to mitigate authorization code interception attacks. PKCE is required for `public` client types
and optional for `confidential` client types.
### OpenID configuration
Each provider offers an unauthenticated endpoint that facilitates OIDC Discovery. All required metadata listed in [OpenID Provider Metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata) is included in the discovery document. Additionally, the recommended `userinfo_endpoint` and `scopes_supported` metadata are included.
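For example, the discovery document of the `default` provider can be fetched
without authentication:

```shell-session
$ curl $VAULT_ADDR/v1/identity/oidc/provider/default/.well-known/openid-configuration
```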
### Keys
Each provider offers an unauthenticated endpoint that provides the public portion of keys used to sign ID tokens. The keys are published in a JSON Web Key Set [(JWKS)](https://datatracker.ietf.org/doc/html/rfc7517) format. The keyset for an individual provider contains the keys referenced by all clients via the `allowed_client_ids` configuration parameter. A `Cache-Control` header is set on responses, allowing clients to refresh their keys upon rotation. The `max-age` of the header is set based on the earliest rotation time of any of the keys in the keyset.
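Similarly, the JWKS for the `default` provider is available without
authentication:

```shell-session
$ curl $VAULT_ADDR/v1/identity/oidc/provider/default/.well-known/keys
```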
### Authorization endpoint
Each provider offers an authenticated [authorization endpoint](https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationEndpoint). The authorization endpoint for each provider is added to Vault's [default policy](/vault/docs/concepts/policies#default-policy) using the `identity/oidc/provider/+/authorize` path. The endpoint incorporates all required [authentication request](https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest) parameters as input.
The endpoint [validates](https://openid.net/specs/openid-connect-core-1_0.html#AuthRequestValidation) client requests and ensures that all required parameters are present and valid. The `redirect_uri` of the request is validated against the client's `redirect_uris`. The requesting Vault entity will be validated against the client's `assignments`. An appropriate [error code](https://openid.net/specs/openid-connect-core-1_0.html#AuthError) is returned for invalid requests.
An authorization code is generated upon successful validation of the request. The authorization code is single-use and cached with a lifetime of approximately 5 minutes, which mitigates the risk of leaks. A response including the original `state` presented by the client and the `code` will be returned to the Vault UI that initiated the request. Vault will issue an HTTP 302 redirect to the `redirect_uri` of the request, which includes the `code` and `state` as query parameters.
### Token endpoint
Each provider will offer a [token endpoint](/vault/api-docs/secret/identity/oidc-provider#token-endpoint). The endpoint is unauthenticated within Vault but authenticates the client by requiring a `client_secret` as described in [client authentication](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). The endpoint ingests all required [token request](/vault/api-docs/secret/identity/oidc-provider#parameters-15) parameters as input. The endpoint [validates](https://openid.net/specs/openid-connect-core-1_0.html#TokenRequestValidation) the client requests and exchanges an authorization code for the ID token and access token. The cache of authorization codes will be verified against the code presented in the exchange. The appropriate [error codes](https://openid.net/specs/openid-connect-core-1_0.html#TokenErrorResponse) are returned for all invalid requests.
The ID token is generated and returned upon successful client authentication and request validation. The ID token will contain a combination of required and configurable claims. The required claims are enumerated in the scopes section above for the `openid` scope. The configurable claims are populated by templates associated with the scopes provided in the authentication request that generated the authorization code.
An access token is also generated and returned upon successful client authentication and request validation. The access token is a Vault [batch token](/vault/docs/concepts/tokens#batch-tokens) with a policy that only provides read access to the issuing provider's [userinfo endpoint](/vault/api-docs/secret/identity/oidc-provider#userinfo-endpoint). The access token also has a TTL as defined by the `access_token_ttl` of the requesting client.
### UserInfo endpoint
Each provider provides an authenticated [userinfo endpoint](/vault/api-docs/secret/identity/oidc-provider#userinfo-endpoint). The endpoint accepts the access token obtained from the token endpoint as a [bearer token](/vault/api-docs#authentication). The userinfo response is a JSON object with the `application/json` content type. The JSON object contains claims for the Vault entity associated with the access token. The claims returned are determined by the scopes requested in the authentication request that produced the access token. The `sub` claim is always returned as the entity ID in the userinfo response. | vault | layout docs page title OIDC Provider description Describes how Vault can be an OIDC identity provider OIDC provider This document provides conceptual information about the Vault OpenID Connect OIDC identity provider feature This feature enables client applications that speak the OIDC protocol to leverage Vault s source of identity vault docs concepts identity and wide range of authentication methods vault docs auth when authenticating end users For more information about the usage of Vault s OIDC provider refer to the OIDC identity provider vault docs secrets identity oidc provider documentation Configuration options The next few sections of the document provide implementation details for each resource that permits Vault configuration as an OIDC identity provider OIDC providers Each Vault namespace will contain a built in provider resource named default The default provider will allow all client applications within the namespace to use it for OIDC flows The default provider can be modified but not deleted Additionally a Vault namespace may contain several provider resources Each configured provider will publish the APIs listed within the OIDC flow vault docs concepts oidc provider oidc flow section The APIs will be served via backend path based routing on Vault s listen address vault docs configuration listener tcp address A provider has the following configuration parameters Issuer URL used in the iss claim of ID tokens Allowed client IDs limits which clients can access the provider Scopes supported limits what identity information is available as claims The issuer URL parameter is necessary for the validation of ID tokens by clients If an URL parameter is not provided explicitly it will default to a URL with Vault s api addr vault docs configuration api addr as the scheme host port component and v1 namespace identity oidc provider name as the path component This means tokens issued by a provider in a specified Vault cluster must be validated within that same cluster If the issuer URL is provided explicitly it must point to a Vault instance that is network reachable by clients for ID token validation The allowed client IDs parameter utilizes the list of client IDs that have been generated by Vault as a part of client registration By default all clients will be disallowed Providing as the parameter value will allow all clients to use the provider The scopes parameter employs a list of references to named scope resources The values provided are discoverable by the scopes supported key in the OIDC discovery document of the provider By default a provider will have the openid scope available See the scopes section below for more details on the openid scope Scopes Providers may reference scope resources via the scopes supported parameter to make specific identity information available as claims A scope will have the following configuration parameters Description identity 
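A sketch of querying the userinfo endpoint, assuming `$ACCESS_TOKEN` holds an
access token obtained from the token endpoint:

```shell-session
$ curl --header "Authorization: Bearer $ACCESS_TOKEN" \
    $VAULT_ADDR/v1/identity/oidc/provider/default/userinfo
```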
information captured by the scope Template maps individual claims to Vault identity information The template parameter takes advantage of the JSON based templating vault api docs secret identity tokens template used by identity tokens for claims mapping This means the parameter will take a JSON string of arbitrary structure where the values may be replaced with specific identity information Template parameters that are not present for a Vault identity are omitted from the resulting claims without an error Example of a JSON template for a scope username contact email phone number groups The full list of template parameters are included in the following table Name Description identity entity id The entity s ID identity entity name The entity s name identity entity groups ids The IDs of the groups the entity is a member of identity entity groups names The names of the groups the entity is a member of identity entity metadata Metadata associated with the entity identity entity metadata metadata key Metadata associated with the entity for the given key identity entity aliases mount accessor id Entity alias ID for the given mount identity entity aliases mount accessor name Entity alias name for the given mount identity entity aliases mount accessor metadata Metadata associated with the alias for the given mount identity entity aliases mount accessor metadata metadata key Metadata associated with the alias for the given mount and metadata key identity entity aliases mount accessor custom metadata Custom metadata associated with the alias for the given mount identity entity aliases mount accessor custom metadata custom metadata key Custom metadata associated with the alias for the given mount and custom metadata key time now Current time as integral seconds since the Epoch time now plus duration Current time plus a duration format string vault docs concepts duration format time now minus duration Current time minus a duration format string vault docs concepts duration format Several named scopes can be made available on an individual provider Note that the top level keys in a JSON template may conflict with those in another scope When scopes are made available on a provider their templates are checked for top level conflicts A warning will be issued to the Vault operator if any conflicts are found This may result in an error if the scopes are requested in an OIDC Authentication Request The openid scope is a unique case scope that may not be modified or deleted The scope will exist in Vault and supported by each provider by default The scope represents the minimum set of claims required by the OIDC specification for inclusion in ID tokens As such templates may not contain top level keys that overwrite the claims populated by the openid scope The following defines the claims key and value mapping for the openid scope iss configured issuer of the provider sub unique entity ID of the Vault user aud ID of the client iat time of token issue exp time of token issue ID token TTL Client applications A client resource represents an application that wants to delegate end user authentication to Vault using the OIDC protocol The information provided by a client resource can be used to configure an OIDC relying party https openid net specs openid connect core 1 0 html Terminology A client has the following configuration parameters Redirect URIs limits the valid redirect URIs in an authentication request Assignments determine who can authenticate with the client Key used to sign the ID tokens ID token TTL 
specifies the time to live for ID tokens Access token TTL specifies the time to live for access tokens Client type determines the client s ability to maintain confidentiality of credentials The key parameter is optional The key will be used to sign ID tokens for the client It cannot be modified after creation If not supplied defaults to the built in default key vault docs concepts oidc provider keys A client id is generated and returned after a successful client registration The client id uniquely identifies the client Its value will be a string with 32 random characters from the base62 character set Note At least one of the redirect URIs of a client must exactly match the redirect uri parameter used in an authentication request initiated by the client Client types A client resource has a client type parameter which specifies the OAuth 2 0 client type https datatracker ietf org doc html rfc6749 section 2 1 based on its ability to maintain confidentiality of credentials The following sections detail the differences between confidential and public clients in Vault Confidential Confidential clients are capable of maintaining the confidentiality of their credentials Confidential clients have a client secret The client secret will have a prefix of hvo secret followed by 64 random characters in the base62 character set Confidential clients may use Proof Key for Code Exchange PKCE https datatracker ietf org doc html rfc7636 during the authorization code flow Confidential clients must authenticate to the token endpoint using the client secret basic or client secret post client authentication method https openid net specs openid connect core 1 0 html ClientAuthentication Public Public clients are not capable of maintaining the confidentiality of their credentials As such public clients do not have a client secret Public clients must use Proof Key for Code Exchange PKCE https datatracker ietf org doc html rfc7636 during the authorization code flow Public clients use the none client authentication method https openid net specs openid connect core 1 0 html ClientAuthentication Assignments Assignment resources are referenced by clients via the assignments parameter This parameter limits the set of Vault users allowed to authenticate The assignments of an associated client are validated during the authentication request ensuring that the Vault identity associated with the request is a member of the assignment s entities or groups Each Vault namespace will contain a built in assignment resource named allow all The allow all assignment allows all Vault entities to authenticate through a client The allow all assignment cannot be modified or deleted Keys Key resources are referenced by clients via the key parameter This parameter specifies the key that will be used to sign ID tokens for the client See existing documentation vault api docs secret identity tokens create a named key for details on keyring management supported signing algorithms rotation periods and verification TTLs Currently a key referenced by a client cannot be changed Each Vault namespace will contain a built in key resource named default The default key can be modified but not deleted Clients that don t specify the key parameter at creation time will use the default key The default key will have the following configuration algorithm RS256 allowed client ids rotation period 24h verification ttl 24h OIDC flow Note The Vault OIDC Provider feature currently only supports the authorization code flow https openid net specs openid connect core 1 
0 html CodeFlowAuth The following sections provide implementation details for the OIDC compliant APIs provided by Vault OIDC providers Vault OIDC providers enable registered clients to authenticate and obtain identity information or claims for their end users They do this by providing the APIs and behavior required to satisfy the OIDC specification for the authorization code flow https openid net specs openid connect core 1 0 html CodeFlowAuth All clients are treated as first party This means that end users will not be required to provide consent to the provider as detailed in section 3 1 2 4 https openid net specs openid connect core 1 0 html Consent of the OIDC specification The provider will release information to clients as long as the end user has ACL access to the provider and their identity has been authorized via an assignment Vault OIDC providers implement Proof Key for Code Exchange PKCE https datatracker ietf org doc html rfc7636 to mitigate authorization code interception attacks PKCE is required for public client types and optional for confidential client types OpenID configuration Each provider offers an unauthenticated endpoint that facilitates OIDC Discovery All required metadata listed in OpenID Provider Metadata https openid net specs openid connect discovery 1 0 html ProviderMetadata is included in the discovery document Additionally the recommended userinfo endpoint and scopes supported metadata are included Keys Each provider offers an unauthenticated endpoint that provides the public portion of keys used to sign ID tokens The keys are published in a JSON Web Key Set JWKS https datatracker ietf org doc html rfc7517 format The keyset for an individual provider contains the keys referenced by all clients via the allowed client ids configuration parameter A Cache Control header to set based on responses allowing clients to refresh their keys upon rotation The max age of the header is set based on the earliest rotation time of any of the keys in the keyset Authorization endpoint Each provider offers an authenticated authorization endpoint https openid net specs openid connect core 1 0 html AuthorizationEndpoint The authorization endpoint for each provider is added to Vault s default policy vault docs concepts policies default policy using the identity oidc provider authorize path The endpoint incorporates all required authentication request https openid net specs openid connect core 1 0 html AuthRequest parameters as input The endpoint validates https openid net specs openid connect core 1 0 html AuthRequestValidation client requests and ensures that all required parameters are present and valid The redirect uri of the request is validated against the client s redirect uris The requesting Vault entity will be validated against the client s assignments An appropriate error code https openid net specs openid connect core 1 0 html AuthError is returned for invalid requests An authorization code is generated with a successful validation of the request The authorization code is single use and cached with a lifetime of approximately 5 minutes which mitigates the risk of leaks A response including the original state presented by the client and code will be returned to the Vault UI which initiated the request Vault will issue an HTTP 302 redirect to the redirect uri of the request which includes the code and state as query parameters Token endpoint Each provider will offer a token endpoint vault api docs secret identity oidc provider token endpoint The endpoint may be 
unauthenticated in Vault but is authenticated by requiring a client secret as described in client authentication https openid net specs openid connect core 1 0 html ClientAuthentication The endpoint ingests all required token request vault api docs secret identity oidc provider parameters 15 parameters as input The endpoint validates https openid net specs openid connect core 1 0 html TokenRequestValidation the client requests and exchanges an authorization code for the ID token and access token The cache of authorization codes will be verified against the code presented in the exchange The appropriate error codes https openid net specs openid connect core 1 0 html TokenErrorResponse are returned for all invalid requests The ID token is generated and returned upon successful client authentication and request validation The ID token will contain a combination of required and configurable claims The required claims are enumerated in the scopes section above for the openid scope The configurable claims are populated by templates associated with the scopes provided in the authentication request that generated the authorization code An access token is also generated and returned upon successful client authentication and request validation The access token is a Vault batch token vault docs concepts tokens batch tokens with a policy that only provides read access to the issuing provider s userinfo endpoint vault api docs secret identity oidc provider userinfo endpoint The access token is also a TTL as defined by the access token ttl of the requesting client UserInfo endpoint Each provider provides an authenticated userinfo endpoint vault api docs secret identity oidc provider userinfo endpoint The endpoint accepts the access token obtained from the token endpoint as a bearer token vault api docs authentication The userinfo response is a JSON object with the application json content type The JSON object contains claims for the Vault entity associated with the access token The claims returned are determined by the scopes requested in the authentication request that produced the access token The sub claim is always returned as the entity ID in the userinfo response |
---
layout: docs
page_title: 'Using PGP, GnuPG, and Keybase'
description: |-
Vault has the ability to integrate with OpenPGP-compatible programs like
GnuPG and services like Keybase.io to provide an additional layer of security
when performing certain operations. This page details the various PGP
integrations, their use, and operation.
---
# Using PGP, GnuPG, and Keybase
Vault has the ability to integrate with OpenPGP-compatible programs like GnuPG
and services like Keybase.io to provide an additional layer of security when
performing certain operations. This page details the various PGP integrations,
their use, and operation.
Keybase.io support is available only in the command-line tool and not via the
Vault HTTP API. Tools that help with initialization should use the Keybase.io
API to obtain the PGP keys needed for a secure initialization if you want them
to use Keybase for keys.

Once Vault has been initialized, it is possible to use Keybase to decrypt the
shards and unseal normally.
## Initializing with PGP
One of the early fundamental problems when bootstrapping and initializing Vault
was that the first user (the initializer) received a plain-text copy of all of
the unseal keys. This defeats the promises of Vault's security model, and it
also makes the distribution of those keys more difficult. Since Vault 0.3,
Vault can optionally be initialized using PGP keys. In this mode, Vault will
generate the unseal keys and then immediately encrypt them using the given
users' public PGP keys. Only the owner of the corresponding private key is then
able to decrypt the value, revealing the plain-text unseal key.
First, you must create, acquire, or import the appropriate key(s) onto the
local machine from which you are initializing Vault. This guide will not
attempt to cover all aspects of PGP keys but give examples using two popular
programs: Keybase and GnuPG.
For beginners, we suggest using [Keybase.io](https://keybase.io/) ("Keybase")
as it is both simpler and has a number of useful behaviors and properties
around key management, such as verification of users' identities using a number
of public online sources. It also exposes the ability for users to have PGP
keys generated, stored, and managed securely on their servers. Using Vault with
Keybase will be discussed first as it is simpler.
## Initializing with keybase
To generate unseal keys for Keybase users, Vault accepts the `keybase:` prefix
to the `-pgp-keys` argument:
```shell-session
$ vault operator init -key-shares=3 -key-threshold=2 \
-pgp-keys="keybase:jefferai,keybase:vishalnayak,keybase:sethvargo"
```
This requires far fewer steps than traditional PGP (e.g. with `gpg`) because
Keybase handles a few of the tedious steps. The output will be similar to the
following:
```
Key 1: wcBMA37rwGt6FS1VAQgAk1q8XQh6yc...
Key 2: wcBMA0wwnMXgRzYYAQgAavqbTCxZGD...
Key 3: wcFMA2DjqDb4YhTAARAAeTFyYxPmUd...
...
```
The output should be rather long in comparison to a regular unseal key. These
keys are encrypted, and only the user holding the corresponding private key can
decrypt the value. The keys are encrypted in the order in which they were
specified in the `-pgp-keys` flag. As such, the keys belong to the respective
Keybase accounts of `jefferai`, `vishalnayak`, and `sethvargo`. These keys can
be distributed over almost any medium, although common sense and good judgement
are advised. The encrypted keys are base64 encoded before being returned.
### Unsealing with keybase
As a user, the easiest way to decrypt your unseal key is with the Keybase CLI
tool. You can download it from [Keybase.io download
page](https://keybase.io/download). After you have downloaded and configured
the Keybase CLI, you are now tasked with entering your unseal key. To get the
plain-text unseal key, you must decrypt the value given to you by the
initializer. To get the plain-text value, run the following command:
```shell-session
$ echo "wcBMA37..." | base64 --decode | keybase pgp decrypt
```
And replace `wcBMA37...` with the encrypted key.
You will be prompted to enter your Keybase passphrase. The output will be the
plain-text unseal key.
```
6ecb46277133e04b29bd0b1b05e60722dab7cdc684a0d3ee2de50ce4c38a357101
```
This is your unseal key in plain-text and should be guarded the same way you
guard a password. Now you can enter your key to the `unseal` command:
```shell-session
$ vault operator unseal
Key (will be hidden): ...
```
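The decrypt and unseal steps can also be combined into a single command (a sketch; note that this exposes the plain-text key to your shell history and process table, so the interactive prompt above is generally preferable):

```shell-session
$ vault operator unseal "$(echo "wcBMA37..." | base64 --decode | keybase pgp decrypt)"
```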
---
## Initializing with GnuPG
GnuPG is an open-source implementation of the OpenPGP standard and is available
on nearly every platform. For more information, please see the [GnuPG
manual](https://gnupg.org/gph/en/manual.html).
<Note>
To use ECDH keys with Vault, you must use GnuPG 2.2.21 or newer.
Refer to the [GnuPG NEWS file](https://dev.gnupg.org/source/gnupg/browse/master/NEWS) for further details.
</Note>
To create a new PGP key, run the following command and follow the prompts:
```shell-session
$ gpg --gen-key
```
To import an existing key, download the public key onto disk and run:
```shell-session
$ gpg --import key.asc
```
Once you have imported the users' public keys, you need to save their values
to disk as either base64 or binary key files. For example:
```shell-session
$ gpg --export 348FFC4C | base64 > seth.asc
```
These key files must exist on disk in base64 (the "standard" base64 character set,
without ASCII armoring) or binary. Once saved to disk, the path to these files
can be specified as an argument to the `-pgp-keys` flag.
```shell-session
$ vault operator init -key-shares=3 -key-threshold=2 \
-pgp-keys="jeff.asc,vishal.asc,seth.asc"
```
The result should look something like this:
```
Key 1: wcBMA37rwGt6FS1VAQgAk1q8XQh6yc...
Key 2: wcBMA0wwnMXgRzYYAQgAavqbTCxZGD...
Key 3: wcFMA2DjqDb4YhTAARAAeTFyYxPmUd...
...
```
The output should be rather long in comparison to a regular unseal key. These
keys are encrypted, and only the user holding the corresponding private key
can decrypt the value. The keys are encrypted in the order in which they were
specified in the `-pgp-keys` flag. As such, the first key belongs to Jeff, the
second to Vishal, and the third to Seth. These keys can be distributed over
almost any medium, although common sense and good judgement are advised. The
encrypted keys are base64 encoded before being returned.
### Unsealing with GnuPG
Assuming you have been given an unseal key that was encrypted using your public
PGP key, you are now tasked with entering your unseal key. To get the
plain-text unseal key, you must decrypt the value given to you by the
initializer. To get the plain-text value, run the following command:
```shell-session
$ echo "wcBMA37..." | base64 --decode | gpg -dq
```
And replace `wcBMA37...` with the encrypted key.
If your private PGP key is protected by a passphrase, you may be prompted to
enter it. After you enter your passphrase, the output will be the plain-text
key:
```
6ecb46277133e04b29bd0b1b05e60722dab7cdc684a0d3ee2de50ce4c38a357101
```
This is your unseal key in plain-text and should be guarded the same way you
guard a password. Now you can enter your key to the `unseal` command:
```shell-session
$ vault operator unseal
Key (will be hidden): ...
```
---
layout: docs
page_title: Cloud access management
sidebar_title: Cloud access management
description: >-
Vault and Boundary can be used together to provide a modern solution to remote access management in the cloud.
---
# Cloud access management
Modern access management must be as dynamic as the infrastructure, people, and systems it serves. Traditionally, the IP address has been the unit of security control: it stands in as a unit of identity, and access management, including traditional privileged access management (PAM), is built around it. But while identity remains a stable anchor, the infrastructure underneath it is now constantly changing, and that shift calls for both a simpler network topology and a more modern approach to access management. As the new perimeter, [identity](https://www.hashicorp.com/resources/why-should-we-use-identity-based-security-as-we-ado) is the fundamental change agent in access management for infrastructure and resources.
This document outlines the security threats and challenges organizations encounter using traditional PAM solutions in the cloud era. It also explains why the consumption of [secrets](https://www.vaultproject.io/use-cases/secrets-management) should be independent of privileged access/session management, and why programmatic access to systems must also interact with secret management outside the traditional PAM process.
HashiCorp Vault and Boundary are security platform building blocks that can address these challenges for large, global enterprises — especially in regulated industries — creating a viable path to address modern privileged access challenges at scale.
## The traditional PAM framework
The traditional PAM framework was conceived for an era of mainframes and monolithic, on-premises infrastructure, believing that any traffic allowed inside an organization's datacenter network was safe and should be allowed broad access to resources in that network. Traditional PAM's main goal was to control elevated ("privileged") access and permissions for users, accounts, processes, and systems across an IT environment.
Traditionally, a few highly technical administrators manage PAM by accessing privileged accounts inside the datacenter. It typically takes administrators multiple days to manually onboard credentials mapping back to compute and systems across an IT environment.
<ImageConfig hideBorder caption="The traditional PAM framework.">

</ImageConfig>
The incumbent PAM process is often ticket-based (ITIL), requiring multi-person approval. After that, there is typically a manual follow-up process to rotate the credentials exposed to humans since long-lived credentials are a security and regulatory compliance risk.
In the world of multi- and hybrid-cloud, this traditional PAM framework is ineffective, leading to an exponential increase of human toil and increased risks.
## Where traditional PAM fails
Traditional PAM falls short of modern software delivery needs and security threats in two key areas.
### Dynamic and ephemeral workloads
In the era of dynamic and ephemeral workloads, a PAM process requiring significant manual intervention introduces risk and does not scale. Infrastructure as code (IaC) has become the standard for automating repeatable IT administrative tasks by building a [platform](https://www.hashicorp.com/resources/what-is-a-platform-team-and-why-do-we-need-them) where developers can go for self-service provisioning, security, networking, and deployment tasks with guardrails. Automating these processes drives cost savings through tool consolidation, time savings, and legacy system deprecation.
Traditional PAM solutions built in the era before the cloud do not fit into this new standard [cloud operating model](https://www.hashicorp.com/cloud-operating-model). Their manual processes are too slow, the frequency of human intervention invites too many potential errors, and their controls are not granular or modular enough to meet modern security needs. They can also negatively impact developer processes and workflows.
### Identity-based access management and zero trust
The need for organizations to quickly move away from the perimeter-defense-only approach (sometimes called the "castle-and-moat" defense) is becoming more urgent. The direction for many leading IT departments is to adopt an identity-based security model, where human and application access is gated using identity through trusted identity providers rather than outmoded identifiers like IP addresses (the traditional approach). The National Institute of Standards and Technology (NIST) recommends shifting to identity-based segmentation instead of network segmentation, as workloads, users, data, and credentials change often.
Similarly, modern best practices encourage the adoption of a [zero-trust architecture](https://www.hashicorp.com/solutions/zero-trust-security). According to NIST:
> "Zero trust architecture is an end-to-end approach to enterprise resource and data security encompassing **identity** (person and **nonperson entities**), credentials, access management, operations, endpoints, hosting environments, and the interconnecting infrastructure." - [NIST 800-207](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf).
NIST's position on how non-human entities authenticate themselves in an enterprise implementing a zero-trust architecture is an open issue.
> "The associated risk is that an attacker will be able to **induce** or coerce an NPE (non-person entities) to perform some task that the attacker is not privileged to perform. There is also a risk that an attacker could access a software agent's **credentials** and **impersonate the agent** when performing tasks." - [NIST 800-207](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf).
NIST's concerns are likely based on poor implementation of non-human authentication. At organizations trying to move away from traditional PAM, a common challenge is automating their credential rotation process and making it less cumbersome to rotate frequently.
Solving this issue is essential because long-lived secrets in any environment can lead to [credential stuffing](https://owasp.org/www-community/attacks/Credential_stuffing): the automated injection of stolen credentials to fraudulently gain access to user accounts. Credential stuffing costs large organizations more than [$2 million](https://money.cnn.com/2018/03/18/technology/biometrics-workplace/index.html) annually in remediating actions, and it can take 10.5 months to detect and identify credential-stuffing activity.
## Solutions to the traditional PAM challenges
Based on the challenges and security risks of traditional PAM, a modern replacement must meet several requirements:
- Automation and versioned "as code" configuration for access and secrets management controls
- Multi-cloud compatibility
- Identity-based access controls facilitated by an identity broker with secrets or workload identity
- Automated secrets rotation, or in some cases, single-use, just-in-time-generated credentials
Let's explore the last two requirements in more detail.
### Workload identity for identity-based access
A [workload identity](https://learn.microsoft.com/en-us/entra/workload-id/workload-identities-overview) is an identity you assign to a software workload so it can authenticate to and access other services and resources; in other words, it is what a software entity uses to authenticate with another system.
According to a [Microsoft blog post](https://blog.identitydigest.com/azuread-federate-k8s/): "workload identity is a new capability that allows you to get rid of secrets in several scenarios."
While using secrets for workload and machine identity is acceptable as organizations modernize their PAM, those secrets can be compromised in credential stuffing attacks, as NIST notes. This is why more solutions are using major cloud providers and their platforms as identity providers to generate workload identities (sometimes called "machine identities") as an alternative to using secrets for identity.
Workload identity sits on a framework where you configure trust relationships between two platforms, establishing a hardened, verifiable identity per workload. Workload and machine identity attestation at the platform removes the risk of impersonation for non-person entities.
Many enterprises leverage an identity broker such as HashiCorp Vault to authenticate applications against a trusted source of identity and then leverage that identity to control access to data, systems, shared services, and secrets. An identity broker creates an opportunity to aggregate multiple sources of identity and present them as a single entity to target platforms; applying policy to that entity is vastly simplified.
### Just-in-time credentials
One of the basic principles of data security is the principle of least privilege, which reduces risk by allowing only specific privileges for specific purposes. However, standing privileges easily violate this principle — account privileges are always available, even when not needed — providing a perpetually available attack surface. Standing accounts increase the threat of data exposure, and managing privileged access with many accounts, many of which belong to machines rather than human users, becomes more challenging.
Zero standing privilege means no long-lived credentials are statically stored anywhere. Temporary credentials are provided in flight (ideally in memory) and just in time; this is a crucial strength of dynamic secrets because it generates ephemeral, extremely short-lived credentials in flight when invoking a request for a secret.
Short-lived credentials created just in time avoid credential reuse and potential leaks. Boundary integrates with Vault to leverage its dynamic secrets support to enable that pattern, where short-lived credentials are created upon access, and destroyed after the session is complete. Applying fine-grained role-based access control with this technique enables a least privileged approach. Dynamic generation lets organizations attribute each credential to a single interactive and non-interactive session, making auditing more straightforward and robust.
When managing machine access to secrets, the dynamic nature of HashiCorp Vault comes to the forefront. Vault gives each service access to secrets based on its identity and associated policy.
HashiCorp Vault natively supports several secrets engines, including:
- Google Cloud secrets engine
- Azure secrets engine
- AWS secrets engine
- Kubernetes secrets engine
- SSH secrets engine
- Databases secrets engine (MySQL, Postgres, SQL Server, MongoDB, etc.)
- PKI secrets engine
Combining multiple authentication sources and secret engines can provide controlled access within various implementations.
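As a concrete sketch of the just-in-time pattern with the databases secrets engine (the role name `my-role` is illustrative and assumes the engine has already been configured with a connection and role):

```shell-session
$ vault secrets enable database

# Each read generates a brand-new, short-lived credential pair that
# Vault revokes automatically when the lease expires.
$ vault read database/creds/my-role
```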
With HashiCorp Vault, whether a user is looking to create and distribute organizational secrets and access or applications are looking to retrieve new database credentials every 15 minutes, centrally managing this access based on trusted identities is critical.
HashiCorp Vault has successfully altered the market's perception of managing secrets across multiple platforms and identity providers. Security in a dynamic world requires a dramatic shift from the approaches common in the static world. Instead of wrapping security around static servers and applications, it must be dynamically woven among the different components and tightly coupled with trusted identities and policies. With Vault, organizations can leverage any trusted source of identity to enforce access to systems and secrets.
<ImageConfig hideBorder caption="HashiCorp Boundary leveraging Vault authentication and secrets.">

</ImageConfig>
## Modern PAM for the cloud
[AWS recommends](https://aws.amazon.com/blogs/security/temporary-elevated-access-management-with-iam-identity-center/) using automation where possible to keep people away from systems — yet not every action can be automated in practice, and some operations might require access by human users. So, the need for a PAM (or just access management) process continues to be justified. In addition, governance mandates session recording of all privileged interactions. Today, most privileged interactive sessions can be programmatically conducted, limiting privileged interactive sessions to emergency P1 incidents.
Adopting a modern access management solution purpose-built for the cloud is essential in today's cloud-centric landscape. Automated onboarding of services is a critical component in a modern PAM solution, especially in highly dynamic and multi-cloud environments. Such a solution empowers organizations to streamline access management, enhance operational efficiency, ensure higher identity assurance, and strengthen security and compliance measures.
[HashiCorp Boundary](https://www.hashicorp.com/products/boundary) is part of the HashiCorp suite of tools for managing identity-based access for modern, dynamic infrastructure. Boundary allows a single workflow to facilitate interactive human sessions for privileged and non-privileged accounts while providing a local development experience. It leverages Vault's identity brokering and dynamic credentials capability to underpin the modern PAM paradigm.
HashiCorp's approach focuses on five core principles to enable modern PAM, centered on identity-based controls in cloud-driven environments:
1. Authentication and authorization
1. Time-bound, least-privileged access
1. Automation and flexible deployment
1. Streamlined DevOps workflow
1. Auditing and logging
Boundary's workflow layers security controls and integrations on multiple levels, monitoring and managing user access through activities aligned with the five core principles:
- Tightly scoped identity-based permissions
- "Just-in-time" network and credential access for sessions via HashiCorp Vault
- Single sign-on to target services and applications via external identity providers
- Automated discovery of target systems
- Session monitoring and management
- SSH session recording
<ImageConfig hideBorder caption="HashiCorp Boundary going full circle, leveraging the ecosystem, including Vault.">

</ImageConfig>
HashiCorp Boundary includes automated controls to facilitate the onboarding of services via HashiCorp Terraform for preconfigured security policies, or via dynamic host catalogs, which automatically discover and onboard new or changed infrastructure resources and their connection information, such as Amazon EC2 hosts and Microsoft Azure virtual machines. Automated onboarding of applications and infrastructure leveraging IaC significantly reduces administrative management and operational toil, accelerating the integration of secure access to infrastructure and services.
HashiCorp has been [named](https://www.hashicorp.com/blog/hashicorp-enters-gartner-pam-mq) for the first time in the 2023 Gartner® Magic Quadrant™ for Privileged Access Management (PAM). We believe HashiCorp's approach, built on what we see as the five essential principles for modern PAM, influenced our inclusion in the Magic Quadrant.
Gartner noted HashiCorp's solution combining HashiCorp Boundary and HashiCorp Vault. These two products can be used to solve new challenges around PAM in the cloud, an approach born from developing world-class capabilities around a specific set of modern core use cases focused on **[workflows, not technologies](https://www.hashicorp.com/tao-of-hashicorp)**.
## Conclusion
The cloud-native era demands a revolutionary shift to a dynamic PAM solution, unencumbered by legacy tooling. HashiCorp Vault and Boundary Enterprise are well-situated to address this paradigm shift for large, global enterprises in regulated industries. This creates a viable path to manage an organization's privileged access challenges at scale.
Given the pace of change in the industry, now is the time for enterprises to begin evaluating by experimentation, steered by the goal of streamlining dynamic access management. It is an opportunity to collaborate across the organization and discover consumption patterns conducive to streamlined developer workflows and a modern shared responsibility security model.
---
layout: docs
page_title: 'Identity'
description: >-
Vault provides an identity management solution to maintain clients who are recognized by Vault.
---
# Identity
This document contains conceptual information about **Identity** along with an
overview of the various terminologies and their concepts. The idea of Identity
is to maintain the clients who are recognized by Vault. As such, Vault provides
an identity management solution through the **Identity secrets engine**. For
more information about the Identity secrets engine and how it is used, refer to
the [Identity Secrets Engine](/vault/docs/secrets/identity) documentation.
## Entities and aliases
Each user may have multiple accounts with various identity providers, and Vault
supports many of those providers to authenticate with Vault. Vault Identity can
tie authentications from various auth methods to a single representation. This
representation of a consolidated identity is called an **Entity**, and its
corresponding accounts with authentication providers can be mapped as
**Aliases**. In essence, each entity is made up of zero or more aliases. An
entity cannot have more than one alias for a particular authentication backend.
For example, a user with accounts in both GitHub and LDAP can be mapped to a
single entity in Vault with two aliases, one of type GitHub and one of type
LDAP.

However, if both aliases are created on the same auth mount, such as a GitHub
mount, they cannot be mapped to the same entity. Aliases of the same auth type
can still be associated with the same entity, as long as they are created on
different auth mounts. The diagrams below illustrate both valid and invalid
scenarios.


When a client authenticates via any credential backend (except the Token
backend), Vault creates a new entity and attaches a new alias to it if a
corresponding entity does not already exist. The entity identifier will be tied
to the authenticated token. When such tokens are used, their entity identifiers
are audit logged, marking a trail of actions performed by specific users.
~> Vault Entity is used to count the number of Vault clients. To learn more
about client count, refer to the [Client Count](/vault/docs/concepts/client-count)
documentation.
## Entity management
Entities in Vault **do not** automatically pull identity information from
anywhere. They need to be explicitly managed by operators. This way, it is
flexible in terms of administratively controlling the number of entities to be
synced against Vault. In some sense, Vault will serve as a _cache_ of
identities and not as a _source_ of identities.
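For example, an operator can pre-create an entity and attach an alias for an existing auth mount through the identity store API (a sketch; the names and the placeholder accessor and entity IDs are illustrative):

```shell-session
# Create an entity and note its ID in the output
$ vault write identity/entity name="bob-smith" policies="base"

# Map the entity to the account "bob" on a userpass mount,
# identified by that mount's accessor
$ vault write identity/entity-alias \
    name="bob" \
    canonical_id="<entity_id>" \
    mount_accessor="<userpass_mount_accessor>"
```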
## Entity policies
Vault policies can be assigned to entities which will grant _additional_
permissions to the token on top of the existing policies on the token. If the
token presented on the API request contains an identifier for the entity and if
that entity has a set of policies on it, then the token will be capable of
performing actions allowed by the policies on the entity as well.

This is a paradigm shift in terms of _when_ the policies of the token get
evaluated. Before identity, the policy names on the token were immutable (not
the contents of those policies though). But with entity policies, along with
the immutable set of policy names on the token, the evaluation of policies
applicable to the token through its identity will happen at request time. This
also adds enormous flexibility to control the behavior of already issued
tokens.
It is important to note that the policies on the entity are only a means to grant
_additional_ capabilities and not a replacement for the policies on the token.
To know the full set of capabilities of the token with an associated entity
identifier, the policies on the token should be taken into account.
~> **NOTE:** Be careful in granting permissions to non-readonly identity endpoints.
If a user can modify an entity, they can grant it additional privileges through
policies. If a user can modify an alias they can login with, they can bind it to
an entity with higher privileges. If a user can modify group membership, they
can add their entity to a group with higher privileges.
## Mount bound aliases
Vault supports multiple authentication backends and also allows enabling the
same type of authentication backend on different mount paths. The alias name of
the user will be unique within the backend's mount. But the identity store
needs to uniquely distinguish between conflicting alias names across different
mounts of these identity providers. Hence, the alias name, in combination with
the authentication backend mount's accessor, serves as the unique identifier of
an alias.
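The accessor for each auth mount can be looked up with the CLI; for example (output abbreviated and illustrative):

```shell-session
$ vault auth list
Path       Type      Accessor                Description
----       ----      --------                -----------
github/    github    auth_github_b7a2f1      n/a
ldap/      ldap      auth_ldap_4c3f92        n/a
```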
The table below shows what information each of the supported auth methods uses
to form the alias name. This is the identifying information that is used to match or create
an entity. If no entities are explicitly created or merged, then one [entity will be implicitly created](#implicit-entities)
for each object on the right-hand side of the table, when it is used to authenticate on
a particular auth mount point.
| Auth method | Name reported by auth method |
| ------------------- | --------------------------------------------------------------------------------------------------- |
| AliCloud | Principal ID |
| AppRole | Role ID |
| AWS IAM | Configurable via `iam_alias` to one of: Role ID (default), IAM unique ID, Canonical ARN, Full ARN |
| AWS EC2 | Configurable via `ec2_alias` to one of: Role ID (default), EC2 instance ID, AMI ID |
| Azure | Subject (from JWT claim) |
| Cloud Foundry | App ID |
| GitHub | User login name associated with token |
| Google Cloud | Configurable via `iam_alias` to one of: Role ID (default), Service account unique ID |
| JWT/OIDC | Configurable via `user_claim` to one of the presented claims (no default value) |
| Kerberos | Username |
| Kubernetes | Configurable via `alias_name_source` to one of: Service account UID (default), Service account name |
| LDAP | Username |
| OCI | Role name |
| Okta | Username |
| RADIUS | Username |
| TLS Certificate | Subject CommonName |
| Token | `entity_alias`, if provided |
| Username (userpass) | Username |
## Local auth methods
**Vault Enterprise:** All auth methods will generate an entity by default when
a token is issued, with the exception of the token store. This applies both to
mounts that are shared between clusters and to cluster-local auth mounts (using
`local=true`) when Vault replication is in use.
If the goal of marking an auth method as `local` was to comply with GDPR
guidelines, then care must be taken not to set data pertaining to the local
auth mount or local auth mount aliases in the metadata of the associated
entity.
## Implicit entities
Operators can create entities for all the users of an auth mount beforehand and
assign policies to them, so that when users log in, the desired capabilities are
already assigned to their tokens via their entities. If that's not done, upon a
successful user login from any of the authentication backends, Vault will
create a new entity and assign an alias against the login that was successful.
Note that the tokens created using the token authentication backend will not
normally have any associated identity information. An existing or new implicit
entity can be assigned by using the `entity_alias` parameter, when creating a
token using a token role with a configured list of `allowed_entity_aliases`.
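A sketch of that flow (the role name `my-role` and alias name `web-app` are illustrative):

```shell-session
# Allowlist the alias name on a token role
$ vault write auth/token/roles/my-role \
    allowed_entity_aliases="web-app"

# Create a token tied to the "web-app" entity alias
$ vault token create -role=my-role -entity-alias=web-app
```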
## Identity auditing
If the token used to make API calls has an associated entity identifier, it
will be audit logged as well. This leaves a trail of actions performed by
specific users.
## Identity groups
Vault identity has support for **groups**. A group can contain multiple entities
as its members. A group can also have subgroups. Policies set on the group are
granted to all members of the group. During request time, when the token's
entity ID is being evaluated for the policies that it has access to, policies
that are inherited due to group memberships are granted along with the policies
on the entity itself.

## Group hierarchical permissions
Entities can be direct members of groups, in which case they inherit the
policies of the groups they belong to. Entities can also be indirect members of
groups. For example, if GroupA has GroupB as a subgroup, then members of GroupB
are indirect members of GroupA. Hence, the members of GroupB will have access
to policies on both GroupA and GroupB.
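For example, a subgroup relationship can be expressed when writing the parent group (a sketch; the policy name and placeholder group ID are illustrative):

```shell-session
# Make GroupB a subgroup of GroupA so that members of GroupB
# also inherit GroupA's policies
$ vault write identity/group name="GroupA" \
    policies="group-a-policy" \
    member_group_ids="<GroupB_id>"
```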
## External vs internal groups
By default, the groups created in the identity store are called **internal groups**.
The membership management of these groups should be carried out
manually.
A group can also be created as an **external group**. In this case, the
entity membership in the group is managed semi-automatically. An external group
serves as a mapping to a group that is outside of the identity store. External
groups can have one (and only one) alias. This alias should map to a notion of
a group that is outside of the identity store.
For example, groups in LDAP and teams in GitHub.
A user in LDAP belonging to a group in LDAP can have their entity ID added as a
member of a group in Vault automatically during _logins_ and _token renewals_.
This works only if the group in Vault is an external group and has an alias
that maps to the group in LDAP.
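A sketch of such a mapping (the group name, policy, and placeholder IDs are illustrative):

```shell-session
# Create an external group and note its ID in the output
$ vault write identity/group name="engineers" type="external" \
    policies="engineering-policy"

# Alias the external group to the LDAP group named "engineers",
# using the LDAP auth mount's accessor
$ vault write identity/group-alias \
    name="engineers" \
    mount_accessor="<ldap_mount_accessor>" \
    canonical_id="<group_id>"
```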
~> **NOTE:** If the user is removed from the group in LDAP, the user will
not immediately be removed from the external group in Vault. The group
membership change will be reflected in Vault only upon the
subsequent **login** or **renewal** operation.
For information about Identity Secrets Engine, refer to [Identity Secrets Engine](/vault/docs/secrets/identity).
## Tutorial
Refer to the [Identity: Entities and
Groups](/vault/tutorials/auth-methods/identity) tutorial to learn how Vault
supports multiple authentication methods and enables the same authentication
method to be used with different mount paths.
normally have any associated identity information An existing or new implicit entity can be assigned by using the entity alias parameter when creating a token using a token role with a configured list of allowed entity aliases Identity auditing If the token used to make API calls has an associated entity identifier it will be audit logged as well This leaves a trail of actions performed by specific users Identity groups Vault identity has support for groups A group can contain multiple entities as its members A group can also have subgroups Policies set on the group are granted to all members of the group During request time when the token s entity ID is being evaluated for the policies that it has access to policies that are inherited due to group memberships are granted along with the policies on the entity itself Identity overview img vault identity doc 3 png Group hierarchical permissions Entities can be direct members of groups in which case they inherit the policies of the groups they belong to Entities can also be indirect members of groups For example if a GroupA has GroupB as subgroup then members of GroupB are indirect members of GroupA Hence the members of GroupB will have access to policies on both GroupA and GroupB External vs internal groups By default the groups created in identity store are called internal groups The membership management of these groups should be carried out manually A group can also be created as an external group In this case the entity membership in the group is managed semi automatically An external group serves as a mapping to a group that is outside of the identity store External groups can have one and only one alias This alias should map to a notion of a group that is outside of the identity store For example groups in LDAP and teams in GitHub A username in LDAP belonging to a group in LDAP can get its entity ID added as a member of a group in Vault automatically during logins and token renewals This works only if the group in Vault is an external group and has an alias that maps to the group in LDAP NOTE If the user is removed from the group in LDAP the user will not immediately be removed from the external group in Vault The group membership change will be reflected in Vault only upon the subsequent login or renewal operation For information about Identity Secrets Engine refer to Identity Secrets Engine vault docs secrets identity Tutorial Refer to the Identity Entities and Groups vault tutorials auth methods identity tutorial to learn how Vault supports mutliple authentication methods and enables the same authentication method to be used with different mount paths |
---
layout: docs
page_title: Storage
sidebar_title: Storage
description: >-
Vault relies on external storage to save its durable information.
---
# Storage
As described on our [Architecture](/vault/docs/internals/architecture) page, Vault's
storage backend is untrusted storage used purely to keep encrypted information.
## Supported storage backends
@include 'ent-supported-storage.mdx'
Many other options for storage are available with community support for Vault - see our
[Storage Configuration](/vault/docs/configuration/storage) section for more
information.
-> **Choosing a storage backend:** Refer to the [integrated storage vs. external
storage](/vault/docs/configuration/storage#integrated-storage-vs-external-storage)
section of the storage configuration page to help make a decision about which
storage backend to use.
## Backups
Due to the highly flexible nature of Vault's potential storage configurations,
providing exact guidance on backing up Vault is challenging.
When backing up Vault, there are two pieces to consider:
1. Vault's encrypted data in the storage backend
2. Configuration files and management scripts for running the Vault server
There's also a big question - what is the error case you're trying to guard
against by saving a backup?
### The big question - why take backups?
It's important to consider the question of "why take a backup" while developing
your ongoing backup and disaster recovery strategy.
Taking a backup is recommended prior to upgrades, as downgrading Vault storage
is not always possible. Generally, a backup is recommended any time a major
change is planned for a cluster.
More specifically, we recommend taking backups **before**, but not during, write
operations to the `/sys` API (excluding the `/sys/leases`, `/sys/namespaces`,
`/sys/tools`, `/sys/wrapping`, `/sys/policies`, and `/sys/pprof` endpoints).
Some examples of workflows that write to the `/sys` API are upgrades and rekeys.
In the future, this guidance may change for the Integrated Storage backend.
Backups _can_ also help with accidental data deletions or modifications. In
this case, the story can get a little tricky. If you simply recover a backup
from 5AM with the correct data, but the current time is 10AM, you will lose data
written between 5 and 10AM. Lucy Davinhart gave a HashiConf talk that serves as
an interesting [case
study](https://www.hashicorp.com/resources/oh-no-i-deleted-my-vault-secret).
We do not recommend backups as protection against the failure of an individual
machine. Vault servers can run in clusters, so to protect against server
failure, we recommend running Vault in [HA
mode](/vault/docs/internals/high-availability). With community features, a
Vault cluster can extend across multiple availability zones within a region.
Vault Enterprise supports replicated clusters and disaster recovery for data
center failure. When using Vault Community Edition in [HA
Mode](/vault/docs/internals/high-availability), a backup can help guard against the
failure of a data center.
Ultimately, backups are not a replacement for running in HA, or for using
replication with Vault Enterprise. As you develop a plan for recovering from or
guarding against failure, you should consider both backups and HA as critical
components of that plan.
### Backing up Vault's persisted data
Backups and restores are ideally performed while Vault is offline. If offline
backups are not feasible, we recommend using a storage backend that supports
atomic snapshots (such as
[Consul](/consul/commands/snapshot) or [Integrated
Storage](/vault/docs/commands/operator/raft#snapshot)).
~> If your storage backend does not support atomic snapshots, we recommend only
taking offline backups.
To perform a backup or restore of Vault's encrypted data when using a
HashiCorp-supported storage backend, see the instructions linked below. For
other storage backends, follow the documentation of that backend for taking and
restoring backups.
- Integrated Storage [snapshots](/vault/docs/commands/operator/raft#snapshot)
- Consul [snapshots](/consul/commands/snapshot)
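For example, with Integrated Storage a snapshot can be saved to and restored from a local file using the `vault operator raft snapshot` subcommands (the `backup.snap` filename here is purely illustrative):

```shell-session
# Save a snapshot of the cluster's data to a local file
$ vault operator raft snapshot save backup.snap

# Later, restore the cluster's data from that snapshot
$ vault operator raft snapshot restore backup.snap
```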
#### Backing up multiple clusters
If you are using Vault Enterprise [Performance
Replication](/vault/docs/enterprise/replication#performance-replication-and-disaster-recovery-dr-replication),
you should plan to take backups of the active node on each of your clusters.
### Configuration
In addition to backing up Vault's encrypted data via the storage backend, you
may also wish to save the server configuration files, any scripts for managing
the Vault service, and ensure you can reinstall any user-installed plugins. The
location of these files will be specific to your installation of Vault.
~> **NOTE**: Although a backup or snapshot of Vault's data from the storage
backend is encrypted, some of your configuration may be sensitive (a Vault token
for Transit Autounseal or a TLS private key in your configuration, for example).
The presence of this information in your backups will mean that they may need
to be carefully protected.
---
layout: docs
page_title: High Availability
description: >-
Vault can be highly available, allowing you to run multiple Vaults to protect
against outages.
---
# High availability mode (HA)
Vault supports a multi-server mode for high availability. This mode protects
against outages by running multiple Vault servers. High availability mode
is automatically enabled when using a data store that supports it.
You can tell if a data store supports high availability mode ("HA") by starting
the server and seeing if "(HA available)" is output next to the data store
information. If it is, then Vault will automatically use HA mode. This
information is also available on the
[Configuration](/vault/docs/configuration) page.
To be highly available, one of the Vault server nodes grabs a lock within the
data store. The successful server node then becomes the active node; all other
nodes become standby nodes. At this point, if the standby nodes receive a
request, they will either [forward the request](#request-forwarding) or
[redirect the client](#client-redirection) depending on the current
configuration and state of the cluster -- see the sections below for details.
Due to this architecture, HA does not enable increased scalability. In general,
the bottleneck of Vault is the data store itself, not Vault core. For example:
to increase the scalability of Vault with Consul, you would generally scale
Consul instead of Vault.
Certain storage backends support high availability mode, which enables them
to store both Vault's data and the HA lock. However, Vault
also supports a split data/HA mode, whereby the lock value and the rest of the
data live separately. This can be done by specifying both the
[`storage`](/vault/docs/configuration#storage) and
[`ha_storage`](/vault/docs/configuration#ha_storage) stanzas in the configuration file
with different backends. For instance, a Vault cluster can be set up to use
Consul as the [`ha_storage`](/vault/docs/configuration#ha_storage) to manage the lock,
and use Amazon S3 as the [`storage`](/vault/docs/configuration#storage) for all other
persisted data.
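A minimal sketch of such a split configuration, with illustrative bucket and address values:

```hcl
# Durable Vault data lives in S3...
storage "s3" {
  bucket = "my-vault-data"
  region = "us-east-1"
}

# ...while Consul holds only the HA lock.
ha_storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}
```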
The sections below explain the server communication patterns and each type of
request handling in more detail. At a minimum, the requirements for redirection
mode must be met for an HA cluster to work successfully.
## Server-to-Server communication
Both methods of request handling rely on the active node advertising
information about itself to the other nodes. Rather than over the network, this
communication takes place within Vault's encrypted storage; the active node
writes this information and unsealed standby Vault nodes can read it.
For the client redirection method, this is the extent of server-to-server
communication -- there is no direct communication; only encrypted entries in the
data store are used to transfer state.
For the request forwarding method, the servers need direct communication with
each other. In order to perform this securely, the active node also advertises,
via the encrypted data store entry, a newly-generated private key (ECDSA-P521)
and a newly-generated self-signed certificate designated for client and server
authentication. Each standby uses the private key and certificate to open a
mutually-authenticated TLS 1.2 connection to the active node via the advertised
cluster address. When client requests come in, the requests are serialized,
sent over this TLS-protected communication channel, and acted upon by the
active node. The active node then returns a response to the standby, which
sends the response back to the requesting client.
## Request forwarding
If request forwarding is enabled (turned on by default in 0.6.2), clients can
still force the older/fallback redirection behavior (see below) if desired by
setting the `X-Vault-No-Request-Forwarding` header to any non-empty value.
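For example, a client can force redirection on a single request by setting the header explicitly (the address and secret path below are illustrative):

```shell-session
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-No-Request-Forwarding: true" \
    https://vault.example.com:8200/v1/secret/foo
```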
Successful cluster setup requires a few configuration parameters, although some
can be automatically determined.
## Client redirection
If the `X-Vault-No-Request-Forwarding` header in the request is set to a non-empty
value, the standby nodes will redirect the client using a `307` status code to
the _active node's_ redirect address.
This is also the fallback method used when request forwarding is turned off or
there is an error performing the forwarding. As such, a redirect address is
always required for all HA setups.
Some HA data store drivers can autodetect the redirect address, but it is often
necessary to configure it manually via a top-level value in the configuration
file. The key for this value is [`api_addr`](/vault/docs/configuration#api_addr) and
the value can also be specified by the `VAULT_API_ADDR` environment variable,
which takes precedence.
What the [`api_addr`](/vault/docs/configuration#api_addr) value should be set to
depends on how Vault is set up. There are two common scenarios: Vault servers
accessed directly by clients, and Vault servers accessed via a load balancer.
In both cases, the [`api_addr`](/vault/docs/configuration#api_addr) should be a full
URL including scheme (`http`/`https`), not simply an IP address and port.
### Direct access
When clients are able to access Vault directly, the
[`api_addr`](/vault/docs/configuration#api_addr) for each node should be that node's
address. For instance, if there are two Vault nodes:
- `A`, accessed via `https://a.vault.mycompany.com:8200`
- `B`, accessed via `https://b.vault.mycompany.com:8200`
Then node `A` would set its
[`api_addr`](/vault/docs/configuration#api_addr) to
`https://a.vault.mycompany.com:8200` and node `B` would set its
[`api_addr`](/vault/docs/configuration#api_addr) to
`https://b.vault.mycompany.com:8200`.
This way, when `A` is the active node, any requests received by node `B` will
cause it to redirect the client to node `A`'s
[`api_addr`](/vault/docs/configuration#api_addr) at `https://a.vault.mycompany.com`,
and vice-versa.
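As a sketch, node `A`'s configuration file would carry the top-level value:

```hcl
# Node A advertises its own externally reachable URL
api_addr = "https://a.vault.mycompany.com:8200"
```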
### Behind load balancers
Sometimes clients use load balancers as an initial method to access one of the
Vault servers, but actually have direct access to each Vault node. In this
case, the Vault servers should actually be set up as described in the above
section, since for redirection purposes the clients have direct access.
However, if the only access to the Vault servers is via the load balancer, the
[`api_addr`](/vault/docs/configuration#api_addr) on each node should be the same: the
address of the load balancer. Clients that reach a standby node will be
redirected back to the load balancer; at that point hopefully the load
balancer's configuration will have been updated to know the address of the
current leader. This can cause a redirect loop and as such is not a recommended
setup when it can be avoided.
### Per-Node cluster listener addresses
Each [`listener`](/vault/docs/configuration/listener) block in Vault's configuration
file contains an [`address`](/vault/docs/configuration/listener/tcp#address) value on
which Vault listens for requests. Similarly, each
[`listener`](/vault/docs/configuration/listener) block can contain a
[`cluster_address`](/vault/docs/configuration/listener/tcp#cluster_address) on which
Vault listens for server-to-server cluster requests. If this value is not set,
its IP address will be automatically set to the same as the
[`address`](/vault/docs/configuration/listener/tcp#address) value, and its port will
be automatically set to the same as the
[`address`](/vault/docs/configuration/listener/tcp#address) value plus one (so by
default, port `8201`).
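A sketch of a [`listener`](/vault/docs/configuration/listener) block with both values set explicitly (the addresses and TLS file paths are illustrative):

```hcl
listener "tcp" {
  address         = "10.0.0.5:8200"
  # Defaults to the `address` IP with the port incremented by one
  cluster_address = "10.0.0.5:8201"
  tls_cert_file   = "/etc/vault/tls/vault.crt"
  tls_key_file    = "/etc/vault/tls/vault.key"
}
```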
Note that _only_ active nodes have active listeners. When a node becomes active
it will start cluster listeners, and when it becomes standby it will stop them.
### Per-Node cluster address
Similar to the [`api_addr`](/vault/docs/configuration#api_addr),
[`cluster_addr`](/vault/docs/configuration#cluster_addr) is the value that each node,
if active, should advertise to the standbys to use for server-to-server
communications, and lives as a top-level value in the configuration file. On
each node, this should be set to a host name or IP address that a standby can
use to reach one of that node's
[`cluster_address`](/vault/docs/configuration#cluster_address) values set in the
[`listener`](/vault/docs/configuration/listener) blocks, including port. (Note that
this will always be forced to `https` since only TLS connections are used
between servers.)
This value can also be specified by the `VAULT_CLUSTER_ADDR` environment
variable, which takes precedence.
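For example, to set the value via the environment rather than the configuration file (the address is illustrative):

```shell-session
$ export VAULT_CLUSTER_ADDR="https://10.0.0.5:8201"
```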
## Storage support
Currently there are several storage backends that support high availability
mode, including [Consul](/vault/docs/configuration/storage/consul),
[ZooKeeper](/vault/docs/configuration/storage/zookeeper) and [etcd](/vault/docs/configuration/storage/etcd). These may
change over time, and the [configuration page](/vault/docs/configuration) should be
referenced.
HashiCorp recommends [Vault Integrated Storage](/vault/docs/configuration/storage/raft) as the default HA backend for new deployments of Vault. [Consul Storage Backend](/vault/docs/configuration/storage/consul) is also a supported option and used by many production deployments. See the [comparison chart](/vault/docs/configuration/storage#integrated-storage-vs-consul-as-vault-storage) for help deciding which option is best for you.
If you're interested in implementing another backend or adding HA support to
another backend, we'd love your contributions. Adding HA support requires
implementing the
[`physical.HABackend`](https://pkg.go.dev/github.com/hashicorp/vault/sdk/physical#HABackend)
interface for the storage backend.
---
layout: docs
page_title: Response Wrapping
description: Wrapping responses in cubbyholes for secure distribution.
---
# Response wrapping
_Note_: Some of this information relies on features of response-wrapping tokens
introduced in Vault 0.8 and may not be available in earlier releases.
## Overview
In many Vault deployments, clients can access Vault directly and consume
returned secrets. In other situations, it may make sense or be desirable to
separate privileges such that one trusted entity is responsible for interacting
with most of the Vault API and passing secrets to the end consumer.
However, the more relays a secret travels through, the more possibilities for
accidental disclosure, especially if the secret is being transmitted in
plaintext. For instance, you may wish to get a TLS private key to a machine
that has been cold-booted, but since you do not want to store a decryption key
in persistent storage, you cannot encrypt this key in transit.
To help address this problem, Vault includes a feature called _response
wrapping_. When requested, Vault can take the response it would have sent to an
HTTP client and instead insert it into the
[`cubbyhole`](/vault/docs/secrets/cubbyhole) of a single-use token,
returning that single-use token instead.
Logically speaking, the response is
wrapped by the token, and retrieving it requires an unwrap operation against
this token. Functionally speaking, the token provides authorization to use
an encryption key from Vault's keyring to decrypt the data.
This provides a powerful mechanism for information sharing in many
environments. In the types of scenarios described above, often the best
practical option is to provide _cover_ for the secret information, be able to
_detect malfeasance_ (interception, tampering), and limit _lifetime_ of the
secret's exposure. Response wrapping performs all three of these duties:
- It provides _cover_ by ensuring that the value being transmitted across the
wire is not the actual secret but a reference to such a secret, namely the
response-wrapping token. Anyone viewing logs or capturing data along the
way does not directly see the sensitive information.
- It provides _malfeasance detection_ by ensuring that only a single party can
ever unwrap the token and see what's inside. A client receiving a token that
cannot be unwrapped can trigger an immediate security incident. In addition,
a client can inspect a given token before unwrapping to ensure that its
origin is from the expected location in Vault.
- It _limits the lifetime_ of secret exposure because the response-wrapping
token has a lifetime that is separate from the wrapped secret (and often can
be much shorter), so if a client fails to come up and unwrap the token, the
token can expire very quickly.
## Response-Wrapping tokens
When a response is wrapped, the normal API response from Vault does not contain
the original secret, but rather contains a set of information related to the
response-wrapping token:
- TTL: The TTL of the response-wrapping token itself
- Token: The actual token value
- Creation Time: The time that the response-wrapping token was created
- Creation Path: The API path that was called in the original request
- Wrapped Accessor: If the wrapped response is an authentication response
containing a Vault token, this is the value of the wrapped token's accessor.
This is useful for orchestration systems (such as Nomad) to be able to control
the lifetime of secrets based on their knowledge of the lifetime of jobs,
without having to actually unwrap the response-wrapping token or gain
knowledge of the token ID inside.
Vault currently does not provide signed response-wrapping tokens, as it
provides little extra protection. If you are being pointed to the correct Vault
server, token validation is performed by interacting with the server itself; a
signed token does not remove the need to validate the token with the server,
since the token is not carrying data but merely an access mechanism and the
server will not release data without validating it. If you are being attacked
and pointed to the wrong Vault server, the same attacker could trivially give
you the wrong signing public key that corresponds to the wrong Vault server.
You could cache a previously valid key, but could also cache a previously valid
address (and in most cases the Vault address will not change or will be set via
a service discovery mechanism). As such, we rely on the fact that the token
itself is not carrying authoritative data and do not sign it.
## Response-Wrapping token operations
Via the `sys/wrapping` path, several operations can be run against wrapping
tokens:
- Lookup (`sys/wrapping/lookup`): This allows fetching the response-wrapping
token's creation time, creation path, and TTL. This path is unauthenticated
and available to response-wrapping tokens themselves. In other words, a
response-wrapping token holder wishing to perform validation is always
allowed to look up the properties of the token.
- Unwrap (`sys/wrapping/unwrap`): Unwrap the token, returning the response
inside. The response that is returned will be the original wire-format
response; it can be used directly with API clients.
- Rewrap (`sys/wrapping/rewrap`): Allows migrating the wrapped data to a new
response-wrapping token. This can be useful for long-lived secrets. For
example, an organization may wish (or be required in a compliance scenario)
to have the `pki` backend's root CA key be returned in a long-lived
response-wrapping token to ensure that nobody has seen the key (easily
verified by performing lookups on the response-wrapping token) but available
for signing CRLs in case they ever accidentally change or lose the `pki`
mount. Often, compliance schemes require periodic rotation of secrets, so
this helps achieve that compliance goal without actually exposing what's
inside.
- Wrap (`sys/wrapping/wrap`): A helper endpoint that echoes back the data sent
to it in a response-wrapping token. Note that blocking access to this
endpoint does not remove the ability for arbitrary data to be wrapped, as it
can be done elsewhere in Vault.
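Each of these operations can also be exercised from the CLI; the sketch below uses a placeholder `<wrapping-token>` value:

```shell-session
# Inspect a wrapping token's creation time, creation path, and TTL
$ vault write sys/wrapping/lookup token="<wrapping-token>"

# Retrieve (and consume) the wrapped response
$ vault unwrap "<wrapping-token>"

# Migrate the wrapped data into a new wrapping token
$ vault write sys/wrapping/rewrap token="<wrapping-token>"

# Wrap arbitrary data in a new 60-second wrapping token
$ vault write -wrap-ttl=60s sys/wrapping/wrap foo=bar
```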
## Response-Wrapping token creation
Response wrapping is per-request and is triggered by providing to Vault the
desired TTL for a response-wrapping token for that request. This is set by the
client using the `X-Vault-Wrap-TTL` header and can be either an integer number
of seconds or a string duration of seconds (`15s`), minutes (`20m`), or hours
(`25h`). When using the Vault CLI, you can set this via the `-wrap-ttl`
parameter. When using the Go API, wrapping is triggered by [setting a helper
function](https://godoc.org/github.com/hashicorp/vault/api#Client.SetWrappingLookupFunc)
that tells the API the conditions under which to request wrapping, by mapping
an operation and path to a desired TTL.
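For example, to have the response to a read returned inside a 15-minute response-wrapping token (the `secret/foo` path is illustrative); the output will contain the wrapping token details rather than the secret itself:

```shell-session
$ vault read -wrap-ttl=15m secret/foo
```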
If a client requests wrapping:
1. The original HTTP response is serialized
2. A new single-use token is generated with the TTL supplied by the client
3. Internally, the original serialized response is stored in the single-use
token's cubbyhole
4. A new response is generated, with the token ID, TTL, and path stored in the
new response's wrap information object
5. The new response is returned to the caller
Note that policies can control minimum/maximum wrapping TTLs; see the [policies
concepts page](/vault/docs/concepts/policies) for
more information.
## Response-Wrapping token validation
Proper validation of response-wrapping tokens is essential to ensure that any
malfeasance is detected. It's also pretty straightforward.
Validation is best performed by the following steps:
1. If a client has been expecting delivery of a response-wrapping token and
none arrives, this may be due to an attacker intercepting the token and then
preventing it from traveling further. This should cause an alert to trigger
an immediate investigation.
2. Perform a lookup on the response-wrapping token. This immediately tells you
if the token has already been unwrapped or is expired (or otherwise
revoked). If the lookup indicates that a token is invalid, it does not
necessarily mean that the data was intercepted (for instance, perhaps the
client took a long time to start up and the TTL expired) but should trigger
an alert for immediate investigation, likely with the assistance of Vault's
audit logs to see if the token really was unwrapped.
3. With the token information in hand, validate that the creation path matches
expectations. If you expect to find a TLS key/certificate inside, chances
are the path should be something like `pki/issue/...`. If the path is not
what you expect, it is possible that the data contained inside was read and
then put into a new response-wrapping token. (This is especially likely if
the path starts with `cubbyhole` or `sys/wrapping/wrap`.) Particular care
should be taken with `kv` secrets engine: exact matches on the path are best
there. For example, if you expect a secret to come from `secret/foo` and
the interceptor provides a token with `secret/bar` as the path, simply
checking for a prefix of `secret/` is not enough.
4. After path validation, unwrap the token. If the unwrap fails, the response
is similar to if the initial lookup fails: trigger an alert for immediate
investigation.
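A sketch of steps 2 through 4 from the CLI, again using a placeholder token value:

```shell-session
# Step 2: look up the token; verify it is still valid and inspect creation_path
$ vault write sys/wrapping/lookup token="<wrapping-token>"

# Steps 3-4: only after the creation path checks out, perform the single-use unwrap
$ vault unwrap "<wrapping-token>"
```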
Following those steps provides very strong assurance that the data contained
within the response-wrapping token has never been seen by anyone other than the
intended client and that any interception or tampering has resulted in a
security alert.
---
layout: docs
page_title: Integrated Storage
description: Learn about the integrated raft storage in Vault.
---
# Integrated storage
Vault supports a number of storage options for the durable storage of Vault's
information. As of Vault 1.4, an Integrated Storage option is offered. This
storage backend does not rely on any third party systems, implements high
availability semantics, supports Enterprise Replication features, and provides
backup/restore workflows.
This option stores Vault's data on each server's filesystem and
uses a consensus protocol to replicate data to each server in the cluster. More
information on the internals of Integrated Storage can be found in the
[Integrated Storage internals
documentation](/vault/docs/internals/integrated-storage/). Additionally, the
[Configuration](/vault/docs/configuration/storage/raft/) docs can help in configuring
Vault to use Integrated Storage.
The sections below go into various details on how to operate Vault with
Integrated Storage.
## Server-to-Server communication
Once nodes are joined to one another they begin to communicate using mTLS over
Vault's cluster port. The cluster port defaults to `8201`. The TLS information
is exchanged at join time and is rotated on a cadence.
A requirement for Integrated Storage is that the
[`cluster_addr`](/vault/docs/concepts/ha#per-node-cluster-address) configuration option
is set. This allows Vault to assign an address to the node ID at join time.
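For example, as a top-level value in each node's configuration file (the hostname is illustrative):

```hcl
cluster_addr = "https://node1.vault.local:8201"
```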
## Cluster membership
This section will outline how to bootstrap and manage a cluster of Vault nodes
running Integrated Storage.
Integrated Storage is bootstrapped during the [initialization
process](/vault/tutorials/getting-started/getting-started-deploy#initializing-the-vault),
and results in a cluster of size 1. Depending on the [desired deployment
size](/vault/docs/internals/integrated-storage/#deployment-table), nodes can be joined
to the active Vault node.
### Joining nodes
Joining is the process of taking an uninitialized Vault node and making it a
member of an existing cluster. In order to authenticate the new node to the
cluster, it must use the same seal mechanism. If using Auto Unseal, the node
must be configured to use the same KMS provider and key as the cluster it's
attempting to join. If using a Shamir seal, the unseal keys must be provided to
the new node before the join process can complete. Once a node has successfully
joined, data from the active node can begin to replicate to it. Once a node has
been joined, it cannot be re-joined to a different cluster.
You can either join the node automatically via the config file or manually through the
API (both methods described below). When joining a node, the API address of the leader node must be used. We
recommend setting the [`api_addr`](/vault/docs/concepts/ha#direct-access) configuration
option on all nodes to make joining simpler.
Always join nodes to a cluster one at a time and wait for the node to become
healthy and (if applicable) a voter before continuing to add more nodes. The
status of a node can be verified by performing a [`list-peers`](/vault/docs/commands/operator/raft#list-peers)
command or by checking the [`autopilot state`](/vault/docs/commands/operator/raft#autopilot-state).
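For example, between joins you might verify the peer set and cluster health with
the following commands (the `autopilot state` subcommand is available in Vault
1.7 and later):

```shell-session
$ vault operator raft list-peers
$ vault operator raft autopilot state
```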
#### `retry_join` configuration
This method enables setting one or more target leader nodes in the config file.
When an uninitialized Vault server starts up, it will attempt to join each potential
leader that is defined, retrying until successful. When one of the specified
leaders becomes active, this node will successfully join. When using a Shamir seal,
the joined nodes will still need to be unsealed manually. When using Auto Unseal,
the node will be able to join and unseal automatically.
An example [`retry_join`](/vault/docs/configuration/storage/raft#retry_join-stanza)
config can be seen below:
```hcl
storage "raft" {
path = "/var/raft/"
node_id = "node3"
retry_join {
leader_api_addr = "https://node1.vault.local:8200"
}
retry_join {
leader_api_addr = "https://node2.vault.local:8200"
}
}
```
Note, in each [`retry_join`](/vault/docs/configuration/storage/raft#retry_join-stanza)
stanza, you may provide a single
[`leader_api_addr`](/vault/docs/configuration/storage/raft#leader_api_addr) or
[`auto_join`](/vault/docs/configuration/storage/raft#auto_join) value. When a cloud
[`auto_join`](/vault/docs/configuration/storage/raft#auto_join) configuration value is
provided, Vault will use [go-discover](https://github.com/hashicorp/go-discover)
to automatically attempt to discover and resolve potential Raft leader
addresses.
Check the go-discover
[README](https://github.com/hashicorp/go-discover/blob/master/README.md) for
details on the format of the [`auto_join`](/vault/docs/configuration/storage/raft#auto_join)
value per cloud provider.
```hcl
storage "raft" {
path = "/var/raft/"
node_id = "node3"
retry_join {
auto_join = "provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... secret_access_key=..."
}
}
```
By default, Vault will attempt to reach discovered peers using HTTPS and port 8200. Operators may override these through the
[`auto_join_scheme`](/vault/docs/configuration/storage/raft#auto_join_scheme) and
[`auto_join_port`](/vault/docs/configuration/storage/raft#auto_join_port) fields
respectively.
```hcl
storage "raft" {
path = "/var/raft/"
node_id = "node3"
retry_join {
auto_join = "provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... secret_access_key=..."
auto_join_scheme = "http"
auto_join_port = 8201
}
}
```
#### Join from the CLI
Alternatively you can use the [`join` CLI
command](/vault/docs/commands/operator/raft/#join) or the API to join a node. The
active node's API address will need to be specified:
```shell-session
$ vault operator raft join https://node1.vault.local:8200
```
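The same join can also be performed against the new node's HTTP API (a sketch;
the node addresses are illustrative):

```shell-session
$ curl \
    --request POST \
    --data '{"leader_api_addr": "https://node1.vault.local:8200"}' \
    https://node4.vault.local:8200/v1/sys/storage/raft/join
```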
#### Non-Voting nodes (Enterprise only)
Nodes that are joined to a cluster can be specified as non-voters. A non-voting
node has all of Vault's data replicated to it, but does not contribute to the
quorum count. This can be used in conjunction with [Performance
Standby](/vault/docs/enterprise/performance-standby/) nodes to add read scalability to
a cluster in cases where a high volume of reads is needed.
```shell-session
$ vault operator raft join -non-voter https://node1.vault.local:8200
```
### Removing peers
Removing a peer node is a necessary step when you no longer want the node in the
cluster. This could happen if the node is rotated for a new one, the hostname
permanently changes and can no longer be accessed, you're attempting to shrink
the size of the cluster, or for many other reasons. Removing the peer will
ensure the cluster stays at the desired size, and that quorum is maintained.
To remove the peer you can issue a
[`remove-peer`](/vault/docs/commands/operator/raft#remove-peer) command and provide the
node ID you wish to remove:
```shell-session
$ vault operator raft remove-peer node1
Peer removed successfully!
```
#### Re-joining after removal
If you have used `remove-peer` to remove a node from the Raft cluster, but you
later want to have this same node re-join the cluster, you will need to delete
any existing Raft data on the removed node before adding it back to the cluster.
This will involve stopping the Vault process, deleting the data directory containing
Raft data, and then restarting the Vault process.
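A sketch of those steps, assuming Vault runs as a systemd service and uses the
`/var/raft/` storage path from the examples above:

```shell-session
$ systemctl stop vault
$ rm -rf /var/raft/*    # clears the configured raft storage path
$ systemctl start vault
```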
### Listing peers
To see the current peer set for the cluster you can issue a
[`list-peers`](/vault/docs/commands/operator/raft#list-peers) command. All the voting
nodes that are listed here contribute to the quorum and a majority must be alive
for Integrated Storage to continue to operate.
```shell-session
$ vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
node1 node1.vault.local:8201 follower true
node2 node2.vault.local:8201 follower true
node3 node3.vault.local:8201 leader true
```
## Integrated storage and TLS
We've glossed over some details in the above sections on bootstrapping clusters.
The instructions are sufficient for most cases, but some users have run into
problems when using auto-join and TLS in conjunction with things like auto-scaling.
The issue is that [go-discover](https://github.com/hashicorp/go-discover) on
most platforms returns IPs (not hostnames), and because the IPs aren't knowable
in advance, the TLS certificates used to secure the Vault API port don't contain
these IPs in their IP SANs.
### Vault networking recap
Before we explore solutions to this problem, let's recapitulate how Vault nodes
speak to one another.
Vault exposes two TCP ports: [the API port](/vault/docs/configuration#api_addr) and
[the cluster port](/vault/docs/configuration#cluster_addr).
The API port is where clients send their Vault HTTP requests.
For a single-node Vault cluster, you don't need to worry about the cluster port, as it won't be used.
When you have multiple nodes, you also need a cluster port. This is used by Vault
nodes to issue RPCs to one another, e.g. to forward requests from a standby node
to the active node, or when Raft is in use, to handle leader election and
replication of stored data.
The cluster port is secured using a TLS certificate that the Vault active node
generates internally. It's clear how this can work when not using integrated
storage: every node has at least read access to storage, so once the active
node has persisted the certificate, the standby nodes can fetch it, and all
agree on how cluster traffic should be encrypted.
It's less clear how this works with Integrated Storage, as there is a chicken
and egg problem. Nodes don't have a shared view of storage until the raft
cluster has been formed, but we're trying to form the raft cluster! To solve
this problem, a Vault node must speak to another Vault node using the API port
instead of the cluster port. This is currently the only situation in which
Vault Community Edition does this (Vault Enterprise also does something similar when setting
up replication).
- `node2` wants to join the cluster, so it issues a challenge API request to existing member `node1`
- `node1` replies to the challenge request with (1) an encrypted random UUID and (2) the seal config
- `node2` must decrypt the UUID using the seal; if using auto-unseal it can do so directly, but if using Shamir it must wait for a user to provide enough unseal keys to perform the decryption
- `node2` sends the decrypted UUID back to `node1` using the answer API
- `node1` sees that `node2` can be trusted (since it has seal access) and replies with a bootstrap package, which includes the cluster TLS certificate and private key
- `node2` is sent a raft snapshot over the cluster port
After this procedure the new node will never again send traffic to the API port.
All subsequent inter-node communication will use the cluster port.

### Assisted raft join techniques
The simplest option is to join nodes by hand: issue [`raft join`](/vault/docs/commands/operator/raft#join) commands specifying the explicit names
or IPs of the nodes to join. In this section we look at other TLS-compatible
options that lend themselves more to automation.
#### Autojoin with TLS servername
As of Vault 1.6.2, the simplest option might be to specify a
[`leader_tls_servername`](/vault/docs/configuration/storage/raft#leader_tls_servername)
in the [`retry_join`](/vault/docs/configuration/storage/raft#retry_join-stanza) stanza
which matches a [DNS
SAN](https://en.wikipedia.org/wiki/Subject_Alternative_Name) in the certificate.
Note that names in a certificate's DNS SAN don't actually have to be registered
in a DNS server. Your nodes may have no names found in DNS, while still
using certificate(s) that contain this shared `servername` in their DNS SANs.
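A sketch of such a configuration (the `leader_tls_servername` value below is a
hypothetical name that must appear in the certificate's DNS SANs):

```hcl
storage "raft" {
  path = "/var/raft/"
  node_id = "node3"
  retry_join {
    auto_join = "provider=aws region=eu-west-1 tag_key=vault tag_value=..."
    leader_tls_servername = "vault-cluster.internal"
  }
}
```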
#### Autojoin but constrain CIDR, list all possible IPs in certificate
If all the Vault node IPs are assigned from a small subnet, e.g. a `/28`, it
becomes practical to put all the IPs that exist in that subnet into the IP SANs
of the TLS certificate the nodes will share.
The drawback here is that the cluster may someday outgrow the CIDR and changing
it may be a pain. For similar reasons this solution may be impractical when
using non-voting nodes and dynamically scaling clusters.
#### Load balancer instead of autojoin
Most Vault instances are going to have a load balancer (LB) between clients and
the Vault nodes. In that case, the LB knows how to route traffic to working
Vault nodes, and there's no need for auto-join: we can just use
[`retry_join`](/vault/docs/configuration/storage/raft#retry_join-stanza) with the LB
address as the target.
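A sketch, assuming a hypothetical load balancer address:

```hcl
storage "raft" {
  path = "/var/raft/"
  node_id = "node3"
  retry_join {
    leader_api_addr = "https://vault-lb.example.internal:8200"
  }
}
```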
One potential issue here: some users want a public-facing LB for clients to
connect to Vault, but aren't comfortable with Vault internal traffic
egressing from the internal network it normally runs on.
## Outage recovery
### Quorum maintained
This section outlines the steps to take when a single server or multiple servers
are in a failed state but quorum is still maintained. This means the remaining
alive servers are still operational, can elect a leader, and are able to process
write requests.
If the failed server is recoverable, the best option is to bring it back online
and have it reconnect to the cluster with the same host address. This will return
the cluster to a fully healthy state.
If this is impractical, you need to remove the failed server. Usually, you can
issue a [`remove-peer`](/vault/docs/commands/operator/raft#remove-peer) command to
remove the failed server if it's still a member of the cluster.
If the [`remove-peer`](/vault/docs/commands/operator/raft#remove-peer) command isn't
possible, or you'd rather manually rewrite the cluster membership, a
[`raft/peers.json`](#manual-recovery-using-peers-json) file can be written to
the configured data directory.
### Quorum lost
In the event that multiple servers are lost, causing a loss of quorum and a
complete outage, partial recovery is still possible.
If the failed servers are recoverable, the best option is to bring them back
online and have them reconnect to the cluster using the same host addresses.
This will return the cluster to a fully healthy state.
If the failed servers are not recoverable, partial recovery is possible using
data on the remaining servers in the cluster. There may be data loss in this
situation because multiple servers were lost, so information about what's
committed could be incomplete. The recovery process implicitly commits all
outstanding Raft log entries, so it's also possible to commit data that was
uncommitted before the failure.
See the section below on manual recovery using
[`peers.json`](#manual-recovery-using-peers-json) for details of the recovery
procedure. You include only the remaining servers in the
[`peers.json`](#manual-recovery-using-peers-json) recovery file. The
cluster should be able to elect a leader once the remaining servers are all
restarted with an identical
[`peers.json`](#manual-recovery-using-peers-json) configuration.
Any servers you introduce later can be fresh with totally clean data
directories and joined using Vault's join command.
In extreme cases, it should be possible to recover with just a single remaining
server by starting that single server with itself as the only peer in the
[`peers.json`](#manual-recovery-using-peers-json) recovery file.
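In that single-server case, the recovery file would contain just the one entry
(a sketch using the example node names from this page; the file format is
described in the next section):

```json
[
  {
    "id": "node1",
    "address": "node1.vault.local:8201",
    "non_voter": false
  }
]
```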
### Manual recovery using peers.json
Using `raft/peers.json` for recovery can cause uncommitted Raft log entries to be
implicitly committed, so this should only be used after an outage where no other
option is available to recover a lost server. Make sure you don't have any
automated processes that will put the peers file in place on a periodic basis.
To begin, stop all remaining servers.
The next step is to go to the [configured data
path](/vault/docs/configuration/storage/raft/#path) of each Vault server. Inside that
directory, there will be a `raft/` sub-directory. We need to create a
`raft/peers.json` file. The file should be formatted as a JSON array containing
the node ID, `address:port`, and suffrage information of each Vault server you
wish to be in the cluster:
```json
[
{
"id": "node1",
"address": "node1.vault.local:8201",
"non_voter": false
},
{
"id": "node2",
"address": "node2.vault.local:8201",
"non_voter": false
},
{
"id": "node3",
"address": "node3.vault.local:8201",
"non_voter": false
}
]
```
- `id` `(string: <required>)` - Specifies the node ID of the server. This can be
found in the config file, or inside the `node-id` file in the server's data
directory if it was auto-generated.
- `address` `(string: <required>)` - Specifies the host and port of the server. The
port is the server's cluster port.
- `non_voter` `(bool: <false>)` - This controls whether the server is a non-voter.
If omitted, it will default to false, which is typical for most clusters. This
is an Enterprise-only feature.
Create entries for all servers. You must confirm that servers you do not
include here have indeed failed and will not later rejoin the cluster. Ensure
that this file is the same across all remaining server nodes.
At this point, you can restart all the remaining servers. The cluster should be
in an operable state again. One of the nodes should claim leadership and become
active.
### Other recovery methods
For other, non-quorum related recovery [Vault's
recovery](/vault/docs/concepts/recovery-mode/) mode can be used.
---
layout: docs
page_title: Migration checklist
description: Use this checklist for decision making related to migrating your Vault deployment to Integrated Storage.
---
# Migration checklist
<Tip title="This is a decision-making checklist">
The purpose of this checklist is not to walk you through the storage
migration steps. This content provides a quick self-check on whether it is in
your best interest to migrate your Vault storage from an external system to
Integrated Storage.
</Tip>
## Who should use this checklist?
Integrated Storage is a recommended storage option, made available in
Vault 1.4. Vault also continues to support other storage solutions
like Consul.
You should use this checklist if you are operating a Vault deployment backed
by external storage like Consul, and you are considering migration to
Integrated Storage.
## Understand architectural differences
It is important that you understand the differences between operating Vault
with external storage and operating with Integrated Storage. The following
sections detail key differences in architecture between Vault with Consul
storage, and Vault with Integrated Storage to help inform your decision.
### Reference architecture with Consul
The recommended number of Vault instances is **3** in a cluster, which connects
to a Consul cluster of **5** or more nodes, as shown in the diagram.
A total of 8 virtual machines hosts this highly available Vault architecture.
<ImageConfig hideBorder>

</ImageConfig>
The processing requirements depend on the encryption and messaging workloads.
Memory requirements are dependent on the total size of secrets stored in
memory. The Vault server itself has minimal storage requirements, but
the Consul nodes should have a high-performance physical storage system.
### Reference architecture with Integrated Storage
The recommended number of Vault instances is **5** in a cluster. In a single HA
cluster, all Vault nodes share the data while an active node holds the lock;
therefore, only the active node has write access. To achieve n-2 redundancy,
(meaning that the cluster can still function after losing 2 nodes),
an ideal size for a Vault HA cluster is 5 nodes.
<Tip title="More deployment details in the documentation">
Refer to the [Integrated
Storage](/vault/docs/internals/integrated-storage#deployment-table)
documentation for more deployment details.
</Tip>
<ImageConfig hideBorder>

</ImageConfig>
Because the data gets persisted on the same host, the Vault server should be
hosted on a relatively high-performance hard disk system.
## Consul vs. Integrated Storage
Integrated Storage eliminates the need for external storage; therefore,
Vault is the only software you need to stand up a cluster. This means that
the host machine must have disk capacity equal to or greater than
that of the existing external storage backend.
### System requirements comparison
The fundamental difference between Vault's Integrated Storage and Consul is
that Integrated Storage stores everything on disk, while [Consul
KV](/consul/docs/dynamic-app-config/kv) stores everything in memory,
which impacts the host's RAM.
#### Machine sizes for Vault - Consul as its storage backend
It is recommended to avoid hosting Consul on an instance with burstable CPU.
| Size | CPU | Memory | Disk | Typical Cloud Instance Types |
| ----- | -------- | ------------ | ----- | ----------------------------------------- |
| Small | 2 core | 4-8 GB RAM | 25 GB | **AWS:** m5.large |
| | | | | **Azure:** Standard_D2_v3 |
| | | | | **GCE:** n1-standard-2, n1-standard-4 |
| Large | 4-8 core | 16-32 GB RAM | 50 GB | **AWS:** m5.xlarge, m5.2xlarge |
| | | | | **Azure:** Standard_D4_v3, Standard_D8_v3 |
| | | | | **GCE:** n1-standard-8, n1-standard-16 |
#### Machine sizes for Vault with Integrated Storage
| Size | CPU | Memory | Disk | Typical Cloud Instance Types |
| ----- | -------- | ------------ | ------ | ------------------------------------------ |
| Small | 2 core | 8-16 GB RAM | 100 GB | **AWS:** m5.large, m5.xlarge |
| | | | | **Azure:** Standard_D2_v3, Standard_D4_v3 |
| | | | | **GCE:** n2-standard-2, n2-standard-4 |
| Large | 4-8 core | 32-64 GB RAM | 200 GB | **AWS:** m5.2xlarge, m5.4xlarge |
| | | | | **Azure:** Standard_D8_v3, Standard_D16_v3 |
| | | | | **GCE:** n2-standard-8, n2-standard-16 |
If many secrets are being generated or rotated frequently, this information will
need to be flushed to the disk often. Therefore, the infrastructure should have
a relatively high-performance hard disk system when using Integrated
Storage.
<Note title="A note about the importance of IOPS">
Vault's Integrated Storage is disk-bound; therefore, care should be taken when planning storage volume size and performance. For cloud providers, IOPS can be dependent on volume size and/or provisioned IOPS. It is recommended to provision IOPS and avoid burstable IOPS. Monitoring of IOPS performance should be implemented in order to tune the storage volume to the IOPS load.
</Note>
### Performance considerations
Because Consul KV is memory-bound, it is necessary to take snapshots frequently.
Vault's Integrated Storage, however, persists everything to disk, which eliminates
the need for such frequent snapshot operations. You still take snapshots to back up
the data so that you can restore it in case of data loss, but far less often. This
reduces the performance cost introduced by frequent snapshot operations.
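With Integrated Storage, a backup snapshot can be taken on demand (the filename
below is illustrative):

```shell-session
$ vault operator raft snapshot save backup.snap
```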
When considering disk performance, note that Vault data changes are written to disk
immediately, rather than in batched snapshots as with Consul, so it is important to
monitor IOPS as well as disk queue depth to limit storage bottlenecks.
### Inspect Vault data
Inspection of Vault data differs considerably from the `consul kv` commands used
to inspect Consul's KV store.
Consult the [Inspect Data in Integrated Storage](/vault/tutorials/monitoring/inspect-data-integrated-storage)
tutorial to learn more about querying Integrated Storage data.
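For example, with Consul as the storage backend, Vault's keys can be listed online
(a sketch; it assumes Vault data is stored under the default `vault/` prefix):

```shell-session
$ consul kv get -keys vault/
```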
### Summary
The table below highlights the differences between Consul and Integrated
Storage.
| Consideration | Consul as storage backend | Vault Integrated Storage |
| ------------------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| System requirement | Memory optimized machine | Storage optimized high IOPS machine |
| Data snapshot | Frequent snapshots | Normal data backup strategy |
| Snapshot automation | Snapshot agent (**Consul Enterprise only**) | Automatic snapshot (**Vault Enterprise v1.6.0 and later**) |
| Data inspection | [Online, use `consul kv` command](/vault/tutorials/monitoring/inspecting-data-consul) | [Offline, requires using recovery mode](/vault/tutorials/monitoring/inspect-data-integrated-storage) |
| Autopilot | Supported | Supported (**Vault 1.7.0 and later**) |
## Self-check questions
- [ ] Where is the product expertise?
- [ ] Do you already have Consul expertise?
- [ ] Are you concerned about lack of Consul knowledge?
- [ ] Do you experience any technical issues with Consul?
- [ ] What motivates the data migration from the current storage to Integrated Storage?
- [ ] Reduce the operational overhead?
- [ ] Reduce the number of machines to run?
- [ ] Reduce the cloud infrastructure cost?
- [ ] Do you have a staging environment where you can run production loads and verify that everything works as you expect?
- [ ] Have you thought through the storage backup process or workflow after migrating to Integrated Storage?
- [ ] Do you currently rely heavily on using Consul to inspect Vault data?
## Tutorials
If you are ready to migrate the current storage backend to Integrated Storage,
refer to the [Storage Migration Tutorial - Consul to Integrated Storage](/vault/tutorials/raft/raft-migration).
To deploy a new cluster with Integrated Storage, refer to the [Vault HA Cluster
with Integrated Storage](/vault/tutorials/raft/raft-storage) tutorial.